Test Report: KVM_Linux 21654

97b08743b22103f17d212d55fa2b3870ea2b5366:2025-09-29:41682

Tests failed (8/345)

Order  Failed test                                     Duration (s)
29     TestAddons/serial/Volcano                       374.11
37     TestAddons/parallel/Ingress                     492.04
41     TestAddons/parallel/CSI                         373.82
44     TestAddons/parallel/LocalPath                   345.26
46     TestAddons/parallel/Yakd                        128.26
91     TestFunctional/parallel/DashboardCmd            302.04
100    TestFunctional/parallel/PersistentVolumeClaim   370.00
104    TestFunctional/parallel/MySQL                   602.49
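
To retry a single failed subtest outside CI, the standard Go subtest filter over the integration package is a reasonable starting point. This is a hedged sketch only; the minikube integration harness usually needs extra flags (such as the path to the built minikube binary under test) that are not shown here:

    # Hedged sketch: re-run one failed subtest by its full subtest path.
    go test ./test/integration -v -timeout 60m -run 'TestAddons/serial/Volcano'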
TestAddons/serial/Volcano (374.11s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 23.009054ms
addons_test.go:876: volcano-admission stabilized in 25.888369ms
addons_test.go:868: volcano-scheduler stabilized in 26.198363ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-799f64f894-jnn6h" [6f384a13-5a13-40d7-bedc-aaf02b7cc343] Pending / Ready:ContainersNotReady (containers with unready status: [volcano-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [volcano-scheduler])
helpers_test.go:337: TestAddons/serial/Volcano: WARNING: pod list for "volcano-system" "app=volcano-scheduler" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:890: ***** TestAddons/serial/Volcano: pod "app=volcano-scheduler" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:890: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-214441 -n addons-214441
addons_test.go:890: TestAddons/serial/Volcano: showing logs for failed pods as of 2025-09-29 11:39:19.230225966 +0000 UTC m=+545.384959640
addons_test.go:890: (dbg) Run:  kubectl --context addons-214441 describe po volcano-scheduler-799f64f894-jnn6h -n volcano-system
addons_test.go:890: (dbg) kubectl --context addons-214441 describe po volcano-scheduler-799f64f894-jnn6h -n volcano-system:
Name:                 volcano-scheduler-799f64f894-jnn6h
Namespace:            volcano-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      volcano-scheduler
Node:                 addons-214441/192.168.39.76
Start Time:           Mon, 29 Sep 2025 11:31:32 +0000
Labels:               app=volcano-scheduler
pod-template-hash=799f64f894
Annotations:          <none>
Status:               Pending
SeccompProfile:       RuntimeDefault
IP:                   10.244.0.19
IPs:
IP:           10.244.0.19
Controlled By:  ReplicaSet/volcano-scheduler-799f64f894
Containers:
volcano-scheduler:
Container ID:  
Image:         docker.io/volcanosh/vc-scheduler:v1.12.2@sha256:6e28f0f79d4cd09c1c34afaba41623c1b4d0fd7087cc99d6449a8a62e073b50e
Image ID:      
Port:          <none>
Host Port:     <none>
Args:
--logtostderr
--scheduler-conf=/volcano.scheduler/volcano-scheduler.conf
--enable-healthz=true
--enable-metrics=true
--leader-elect=false
--kube-api-qps=2000
--kube-api-burst=2000
--schedule-period=1s
--node-worker-threads=20
-v=3
2>&1
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
DEBUG_SOCKET_DIR:  /tmp/klog-socks
Mounts:
/tmp/klog-socks from klog-sock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9p5ql (ro)
/volcano.scheduler from scheduler-config (rw)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
scheduler-config:
Type:      ConfigMap (a volume populated by a ConfigMap)
Name:      volcano-scheduler-configmap
Optional:  false
klog-sock:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-9p5ql:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  7m47s                  default-scheduler  Successfully assigned volcano-system/volcano-scheduler-799f64f894-jnn6h to addons-214441
Normal   Pulling    3m54s (x5 over 7m43s)  kubelet            Pulling image "docker.io/volcanosh/vc-scheduler:v1.12.2@sha256:6e28f0f79d4cd09c1c34afaba41623c1b4d0fd7087cc99d6449a8a62e073b50e"
Warning  Failed     3m54s (x5 over 6m51s)  kubelet            Failed to pull image "docker.io/volcanosh/vc-scheduler:v1.12.2@sha256:6e28f0f79d4cd09c1c34afaba41623c1b4d0fd7087cc99d6449a8a62e073b50e": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     3m54s (x5 over 6m51s)  kubelet            Error: ErrImagePull
Normal   BackOff    103s (x21 over 6m50s)  kubelet            Back-off pulling image "docker.io/volcanosh/vc-scheduler:v1.12.2@sha256:6e28f0f79d4cd09c1c34afaba41623c1b4d0fd7087cc99d6449a8a62e073b50e"
Warning  Failed     103s (x21 over 6m50s)  kubelet            Error: ImagePullBackOff
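
The events above show every pull attempt being rejected with Docker Hub's toomanyrequests error, so the scheduler image never reaches the node. As a diagnostic aid (not part of the original test output), the host's remaining anonymous pull quota can be checked against Docker Hub's documented rate-limit preview endpoint; this sketch assumes curl and jq are available:

    # Hedged sketch: read Docker Hub's rate-limit headers without consuming a pull (HEAD request).
    TOKEN=$(curl -fsS "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -fsS --head -H "Authorization: Bearer $TOKEN" \
      https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i '^ratelimit'
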
addons_test.go:890: (dbg) Run:  kubectl --context addons-214441 logs volcano-scheduler-799f64f894-jnn6h -n volcano-system
addons_test.go:890: (dbg) Non-zero exit: kubectl --context addons-214441 logs volcano-scheduler-799f64f894-jnn6h -n volcano-system: exit status 1 (76.274953ms)

** stderr **
	Error from server (BadRequest): container "volcano-scheduler" in pod "volcano-scheduler-799f64f894-jnn6h" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:890: kubectl --context addons-214441 logs volcano-scheduler-799f64f894-jnn6h -n volcano-system: exit status 1
addons_test.go:891: failed waiting for app=volcano-scheduler pod: app=volcano-scheduler within 6m0s: context deadline exceeded
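
The root cause for this subtest is therefore the unauthenticated Docker Hub pull limit rather than Volcano itself. A hedged mitigation sketch, assuming Docker access on the host running the profile: authenticate the pull (authenticated accounts get a larger quota) or side-load the image so kubelet's next back-off retry finds it locally; note the pod pins the image by digest, so the digest-qualified reference from the events may be needed instead of the bare tag. The registry-creds addon is already enabled by this run's start command (see the Audit table below) and could alternatively be configured with Docker Hub credentials:

    # Hedged sketch (not from the original report): work around the Docker Hub pull limit.
    docker login                                    # authenticated pulls get a larger quota
    docker pull docker.io/volcanosh/vc-scheduler:v1.12.2
    minikube -p addons-214441 image load docker.io/volcanosh/vc-scheduler:v1.12.2
    # or supply Docker Hub credentials to the already-enabled registry-creds addon:
    minikube -p addons-214441 addons configure registry-creds
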
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/serial/Volcano]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-214441 -n addons-214441
helpers_test.go:252: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-214441 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-214441 logs -n 25: (1.187153922s)
helpers_test.go:260: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-383930 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2  --auto-update-drivers=false                                                                                                                                                                                                                                                                                              │ download-only-383930 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ delete  │ -p download-only-383930                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ download-only-383930 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ start   │ -o=json --download-only -p download-only-221115 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=kvm2  --auto-update-drivers=false                                                                                                                                                                                                                                                                                              │ download-only-221115 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ delete  │ -p download-only-221115                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ download-only-221115 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ delete  │ -p download-only-383930                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ download-only-383930 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ delete  │ -p download-only-221115                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ download-only-221115 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ start   │ --download-only -p binary-mirror-005122 --alsologtostderr --binary-mirror http://127.0.0.1:35607 --driver=kvm2  --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-005122 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ delete  │ -p binary-mirror-005122                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ binary-mirror-005122 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ addons  │ disable dashboard -p addons-214441                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ addons  │ enable dashboard -p addons-214441                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ start   │ -p addons-214441 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:33 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:30:26
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:30:26.464374  595895 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:30:26.464481  595895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:30:26.464487  595895 out.go:374] Setting ErrFile to fd 2...
	I0929 11:30:26.464493  595895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:30:26.464787  595895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21654-591397/.minikube/bin
	I0929 11:30:26.465454  595895 out.go:368] Setting JSON to false
	I0929 11:30:26.466447  595895 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":4374,"bootTime":1759141052,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:30:26.466553  595895 start.go:140] virtualization: kvm guest
	I0929 11:30:26.468688  595895 out.go:179] * [addons-214441] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:30:26.470181  595895 out.go:179]   - MINIKUBE_LOCATION=21654
	I0929 11:30:26.470220  595895 notify.go:220] Checking for updates...
	I0929 11:30:26.473145  595895 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:30:26.474634  595895 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21654-591397/kubeconfig
	I0929 11:30:26.475793  595895 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21654-591397/.minikube
	I0929 11:30:26.477353  595895 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:30:26.478534  595895 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:30:26.479959  595895 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:30:26.513451  595895 out.go:179] * Using the kvm2 driver based on user configuration
	I0929 11:30:26.514622  595895 start.go:304] selected driver: kvm2
	I0929 11:30:26.514644  595895 start.go:924] validating driver "kvm2" against <nil>
	I0929 11:30:26.514659  595895 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:30:26.515675  595895 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:30:26.515785  595895 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21654-591397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:30:26.530531  595895 install.go:163] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:30:26.530568  595895 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21654-591397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:30:26.545187  595895 install.go:163] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:30:26.545244  595895 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 11:30:26.545491  595895 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:30:26.545527  595895 cni.go:84] Creating CNI manager for ""
	I0929 11:30:26.545570  595895 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 11:30:26.545579  595895 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 11:30:26.545628  595895 start.go:348] cluster config:
	{Name:addons-214441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:30:26.545714  595895 iso.go:125] acquiring lock: {Name:mk3bf2644aacab696b9f4d566a6d81a30d75b71a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:30:26.547400  595895 out.go:179] * Starting "addons-214441" primary control-plane node in "addons-214441" cluster
	I0929 11:30:26.548855  595895 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 11:30:26.548909  595895 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21654-591397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0929 11:30:26.548918  595895 cache.go:58] Caching tarball of preloaded images
	I0929 11:30:26.549035  595895 preload.go:172] Found /home/jenkins/minikube-integration/21654-591397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0929 11:30:26.549046  595895 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 11:30:26.549389  595895 profile.go:143] Saving config to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/config.json ...
	I0929 11:30:26.549415  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/config.json: {Name:mka28e9e486990f30eb3eb321797c26d13a435f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:26.549559  595895 start.go:360] acquireMachinesLock for addons-214441: {Name:mka3370f06ebed6e47b43729e748683065f344f5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0929 11:30:26.549614  595895 start.go:364] duration metric: took 40.43µs to acquireMachinesLock for "addons-214441"
	I0929 11:30:26.549633  595895 start.go:93] Provisioning new machine with config: &{Name:addons-214441 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 11:30:26.549681  595895 start.go:125] createHost starting for "" (driver="kvm2")
	I0929 11:30:26.551210  595895 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0929 11:30:26.551360  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:30:26.551417  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:30:26.564991  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37291
	I0929 11:30:26.565640  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:30:26.566242  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:30:26.566262  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:30:26.566742  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:30:26.566933  595895 main.go:141] libmachine: (addons-214441) Calling .GetMachineName
	I0929 11:30:26.567150  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:26.567316  595895 start.go:159] libmachine.API.Create for "addons-214441" (driver="kvm2")
	I0929 11:30:26.567351  595895 client.go:168] LocalClient.Create starting
	I0929 11:30:26.567402  595895 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem
	I0929 11:30:26.955780  595895 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/cert.pem
	I0929 11:30:27.214636  595895 main.go:141] libmachine: Running pre-create checks...
	I0929 11:30:27.214665  595895 main.go:141] libmachine: (addons-214441) Calling .PreCreateCheck
	I0929 11:30:27.215304  595895 main.go:141] libmachine: (addons-214441) Calling .GetConfigRaw
	I0929 11:30:27.215869  595895 main.go:141] libmachine: Creating machine...
	I0929 11:30:27.215887  595895 main.go:141] libmachine: (addons-214441) Calling .Create
	I0929 11:30:27.216119  595895 main.go:141] libmachine: (addons-214441) creating domain...
	I0929 11:30:27.216141  595895 main.go:141] libmachine: (addons-214441) creating network...
	I0929 11:30:27.217698  595895 main.go:141] libmachine: (addons-214441) DBG | found existing default network
	I0929 11:30:27.217987  595895 main.go:141] libmachine: (addons-214441) DBG | <network>
	I0929 11:30:27.218041  595895 main.go:141] libmachine: (addons-214441) DBG |   <name>default</name>
	I0929 11:30:27.218077  595895 main.go:141] libmachine: (addons-214441) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I0929 11:30:27.218099  595895 main.go:141] libmachine: (addons-214441) DBG |   <forward mode='nat'>
	I0929 11:30:27.218124  595895 main.go:141] libmachine: (addons-214441) DBG |     <nat>
	I0929 11:30:27.218134  595895 main.go:141] libmachine: (addons-214441) DBG |       <port start='1024' end='65535'/>
	I0929 11:30:27.218144  595895 main.go:141] libmachine: (addons-214441) DBG |     </nat>
	I0929 11:30:27.218151  595895 main.go:141] libmachine: (addons-214441) DBG |   </forward>
	I0929 11:30:27.218161  595895 main.go:141] libmachine: (addons-214441) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I0929 11:30:27.218190  595895 main.go:141] libmachine: (addons-214441) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I0929 11:30:27.218203  595895 main.go:141] libmachine: (addons-214441) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I0929 11:30:27.218212  595895 main.go:141] libmachine: (addons-214441) DBG |     <dhcp>
	I0929 11:30:27.218222  595895 main.go:141] libmachine: (addons-214441) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I0929 11:30:27.218232  595895 main.go:141] libmachine: (addons-214441) DBG |     </dhcp>
	I0929 11:30:27.218245  595895 main.go:141] libmachine: (addons-214441) DBG |   </ip>
	I0929 11:30:27.218256  595895 main.go:141] libmachine: (addons-214441) DBG | </network>
	I0929 11:30:27.218263  595895 main.go:141] libmachine: (addons-214441) DBG | 
	I0929 11:30:27.219018  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.218796  595923 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000200f10}
	I0929 11:30:27.219127  595895 main.go:141] libmachine: (addons-214441) DBG | defining private network:
	I0929 11:30:27.219156  595895 main.go:141] libmachine: (addons-214441) DBG | 
	I0929 11:30:27.219168  595895 main.go:141] libmachine: (addons-214441) DBG | <network>
	I0929 11:30:27.219179  595895 main.go:141] libmachine: (addons-214441) DBG |   <name>mk-addons-214441</name>
	I0929 11:30:27.219187  595895 main.go:141] libmachine: (addons-214441) DBG |   <dns enable='no'/>
	I0929 11:30:27.219194  595895 main.go:141] libmachine: (addons-214441) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0929 11:30:27.219200  595895 main.go:141] libmachine: (addons-214441) DBG |     <dhcp>
	I0929 11:30:27.219208  595895 main.go:141] libmachine: (addons-214441) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0929 11:30:27.219214  595895 main.go:141] libmachine: (addons-214441) DBG |     </dhcp>
	I0929 11:30:27.219218  595895 main.go:141] libmachine: (addons-214441) DBG |   </ip>
	I0929 11:30:27.219224  595895 main.go:141] libmachine: (addons-214441) DBG | </network>
	I0929 11:30:27.219227  595895 main.go:141] libmachine: (addons-214441) DBG | 
	I0929 11:30:27.225021  595895 main.go:141] libmachine: (addons-214441) DBG | creating private network mk-addons-214441 192.168.39.0/24...
	I0929 11:30:27.300287  595895 main.go:141] libmachine: (addons-214441) DBG | private network mk-addons-214441 192.168.39.0/24 created
	I0929 11:30:27.300635  595895 main.go:141] libmachine: (addons-214441) DBG | <network>
	I0929 11:30:27.300651  595895 main.go:141] libmachine: (addons-214441) DBG |   <name>mk-addons-214441</name>
	I0929 11:30:27.300675  595895 main.go:141] libmachine: (addons-214441) setting up store path in /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441 ...
	I0929 11:30:27.300695  595895 main.go:141] libmachine: (addons-214441) building disk image from file:///home/jenkins/minikube-integration/21654-591397/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I0929 11:30:27.300713  595895 main.go:141] libmachine: (addons-214441) DBG |   <uuid>9d6191f7-7df6-4691-bff3-3dbacc8ac925</uuid>
	I0929 11:30:27.300719  595895 main.go:141] libmachine: (addons-214441) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I0929 11:30:27.300726  595895 main.go:141] libmachine: (addons-214441) DBG |   <mac address='52:54:00:ff:bc:22'/>
	I0929 11:30:27.300730  595895 main.go:141] libmachine: (addons-214441) DBG |   <dns enable='no'/>
	I0929 11:30:27.300736  595895 main.go:141] libmachine: (addons-214441) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0929 11:30:27.300741  595895 main.go:141] libmachine: (addons-214441) DBG |     <dhcp>
	I0929 11:30:27.300747  595895 main.go:141] libmachine: (addons-214441) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0929 11:30:27.300754  595895 main.go:141] libmachine: (addons-214441) DBG |     </dhcp>
	I0929 11:30:27.300758  595895 main.go:141] libmachine: (addons-214441) DBG |   </ip>
	I0929 11:30:27.300763  595895 main.go:141] libmachine: (addons-214441) DBG | </network>
	I0929 11:30:27.300770  595895 main.go:141] libmachine: (addons-214441) DBG | 
	I0929 11:30:27.300780  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.300615  595923 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21654-591397/.minikube
	I0929 11:30:27.300970  595895 main.go:141] libmachine: (addons-214441) Downloading /home/jenkins/minikube-integration/21654-591397/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21654-591397/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I0929 11:30:27.567829  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.567633  595923 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa...
	I0929 11:30:27.812384  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.812174  595923 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/addons-214441.rawdisk...
	I0929 11:30:27.812428  595895 main.go:141] libmachine: (addons-214441) DBG | Writing magic tar header
	I0929 11:30:27.812454  595895 main.go:141] libmachine: (addons-214441) DBG | Writing SSH key tar header
	I0929 11:30:27.812465  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.812330  595923 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441 ...
	I0929 11:30:27.812480  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441
	I0929 11:30:27.812548  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21654-591397/.minikube/machines
	I0929 11:30:27.812584  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441 (perms=drwx------)
	I0929 11:30:27.812594  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21654-591397/.minikube
	I0929 11:30:27.812609  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21654-591397
	I0929 11:30:27.812617  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0929 11:30:27.812625  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins
	I0929 11:30:27.812632  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home
	I0929 11:30:27.812642  595895 main.go:141] libmachine: (addons-214441) DBG | skipping /home - not owner
	I0929 11:30:27.812734  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration/21654-591397/.minikube/machines (perms=drwxr-xr-x)
	I0929 11:30:27.812784  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration/21654-591397/.minikube (perms=drwxr-xr-x)
	I0929 11:30:27.812829  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration/21654-591397 (perms=drwxrwxr-x)
	I0929 11:30:27.812851  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0929 11:30:27.812866  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0929 11:30:27.812895  595895 main.go:141] libmachine: (addons-214441) defining domain...
	I0929 11:30:27.814169  595895 main.go:141] libmachine: (addons-214441) defining domain using XML: 
	I0929 11:30:27.814189  595895 main.go:141] libmachine: (addons-214441) <domain type='kvm'>
	I0929 11:30:27.814197  595895 main.go:141] libmachine: (addons-214441)   <name>addons-214441</name>
	I0929 11:30:27.814204  595895 main.go:141] libmachine: (addons-214441)   <memory unit='MiB'>4096</memory>
	I0929 11:30:27.814211  595895 main.go:141] libmachine: (addons-214441)   <vcpu>2</vcpu>
	I0929 11:30:27.814217  595895 main.go:141] libmachine: (addons-214441)   <features>
	I0929 11:30:27.814224  595895 main.go:141] libmachine: (addons-214441)     <acpi/>
	I0929 11:30:27.814236  595895 main.go:141] libmachine: (addons-214441)     <apic/>
	I0929 11:30:27.814260  595895 main.go:141] libmachine: (addons-214441)     <pae/>
	I0929 11:30:27.814274  595895 main.go:141] libmachine: (addons-214441)   </features>
	I0929 11:30:27.814283  595895 main.go:141] libmachine: (addons-214441)   <cpu mode='host-passthrough'>
	I0929 11:30:27.814290  595895 main.go:141] libmachine: (addons-214441)   </cpu>
	I0929 11:30:27.814300  595895 main.go:141] libmachine: (addons-214441)   <os>
	I0929 11:30:27.814310  595895 main.go:141] libmachine: (addons-214441)     <type>hvm</type>
	I0929 11:30:27.814319  595895 main.go:141] libmachine: (addons-214441)     <boot dev='cdrom'/>
	I0929 11:30:27.814323  595895 main.go:141] libmachine: (addons-214441)     <boot dev='hd'/>
	I0929 11:30:27.814331  595895 main.go:141] libmachine: (addons-214441)     <bootmenu enable='no'/>
	I0929 11:30:27.814337  595895 main.go:141] libmachine: (addons-214441)   </os>
	I0929 11:30:27.814342  595895 main.go:141] libmachine: (addons-214441)   <devices>
	I0929 11:30:27.814352  595895 main.go:141] libmachine: (addons-214441)     <disk type='file' device='cdrom'>
	I0929 11:30:27.814381  595895 main.go:141] libmachine: (addons-214441)       <source file='/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/boot2docker.iso'/>
	I0929 11:30:27.814393  595895 main.go:141] libmachine: (addons-214441)       <target dev='hdc' bus='scsi'/>
	I0929 11:30:27.814438  595895 main.go:141] libmachine: (addons-214441)       <readonly/>
	I0929 11:30:27.814469  595895 main.go:141] libmachine: (addons-214441)     </disk>
	I0929 11:30:27.814485  595895 main.go:141] libmachine: (addons-214441)     <disk type='file' device='disk'>
	I0929 11:30:27.814501  595895 main.go:141] libmachine: (addons-214441)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0929 11:30:27.814519  595895 main.go:141] libmachine: (addons-214441)       <source file='/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/addons-214441.rawdisk'/>
	I0929 11:30:27.814537  595895 main.go:141] libmachine: (addons-214441)       <target dev='hda' bus='virtio'/>
	I0929 11:30:27.814551  595895 main.go:141] libmachine: (addons-214441)     </disk>
	I0929 11:30:27.814564  595895 main.go:141] libmachine: (addons-214441)     <interface type='network'>
	I0929 11:30:27.814577  595895 main.go:141] libmachine: (addons-214441)       <source network='mk-addons-214441'/>
	I0929 11:30:27.814587  595895 main.go:141] libmachine: (addons-214441)       <model type='virtio'/>
	I0929 11:30:27.814598  595895 main.go:141] libmachine: (addons-214441)     </interface>
	I0929 11:30:27.814608  595895 main.go:141] libmachine: (addons-214441)     <interface type='network'>
	I0929 11:30:27.814616  595895 main.go:141] libmachine: (addons-214441)       <source network='default'/>
	I0929 11:30:27.814644  595895 main.go:141] libmachine: (addons-214441)       <model type='virtio'/>
	I0929 11:30:27.814658  595895 main.go:141] libmachine: (addons-214441)     </interface>
	I0929 11:30:27.814670  595895 main.go:141] libmachine: (addons-214441)     <serial type='pty'>
	I0929 11:30:27.814681  595895 main.go:141] libmachine: (addons-214441)       <target port='0'/>
	I0929 11:30:27.814692  595895 main.go:141] libmachine: (addons-214441)     </serial>
	I0929 11:30:27.814707  595895 main.go:141] libmachine: (addons-214441)     <console type='pty'>
	I0929 11:30:27.814717  595895 main.go:141] libmachine: (addons-214441)       <target type='serial' port='0'/>
	I0929 11:30:27.814725  595895 main.go:141] libmachine: (addons-214441)     </console>
	I0929 11:30:27.814732  595895 main.go:141] libmachine: (addons-214441)     <rng model='virtio'>
	I0929 11:30:27.814741  595895 main.go:141] libmachine: (addons-214441)       <backend model='random'>/dev/random</backend>
	I0929 11:30:27.814750  595895 main.go:141] libmachine: (addons-214441)     </rng>
	I0929 11:30:27.814759  595895 main.go:141] libmachine: (addons-214441)   </devices>
	I0929 11:30:27.814768  595895 main.go:141] libmachine: (addons-214441) </domain>
	I0929 11:30:27.814781  595895 main.go:141] libmachine: (addons-214441) 
	I0929 11:30:27.822484  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:b8:70:d1 in network default
	I0929 11:30:27.823310  595895 main.go:141] libmachine: (addons-214441) starting domain...
	I0929 11:30:27.823336  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:27.823353  595895 main.go:141] libmachine: (addons-214441) ensuring networks are active...
	I0929 11:30:27.824165  595895 main.go:141] libmachine: (addons-214441) Ensuring network default is active
	I0929 11:30:27.824600  595895 main.go:141] libmachine: (addons-214441) Ensuring network mk-addons-214441 is active
	I0929 11:30:27.825327  595895 main.go:141] libmachine: (addons-214441) getting domain XML...
	I0929 11:30:27.826485  595895 main.go:141] libmachine: (addons-214441) DBG | starting domain XML:
	I0929 11:30:27.826497  595895 main.go:141] libmachine: (addons-214441) DBG | <domain type='kvm'>
	I0929 11:30:27.826534  595895 main.go:141] libmachine: (addons-214441) DBG |   <name>addons-214441</name>
	I0929 11:30:27.826556  595895 main.go:141] libmachine: (addons-214441) DBG |   <uuid>44179717-3988-47cd-b8d8-61dffe58e059</uuid>
	I0929 11:30:27.826564  595895 main.go:141] libmachine: (addons-214441) DBG |   <memory unit='KiB'>4194304</memory>
	I0929 11:30:27.826573  595895 main.go:141] libmachine: (addons-214441) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I0929 11:30:27.826583  595895 main.go:141] libmachine: (addons-214441) DBG |   <vcpu placement='static'>2</vcpu>
	I0929 11:30:27.826594  595895 main.go:141] libmachine: (addons-214441) DBG |   <os>
	I0929 11:30:27.826603  595895 main.go:141] libmachine: (addons-214441) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0929 11:30:27.826611  595895 main.go:141] libmachine: (addons-214441) DBG |     <boot dev='cdrom'/>
	I0929 11:30:27.826619  595895 main.go:141] libmachine: (addons-214441) DBG |     <boot dev='hd'/>
	I0929 11:30:27.826627  595895 main.go:141] libmachine: (addons-214441) DBG |     <bootmenu enable='no'/>
	I0929 11:30:27.826636  595895 main.go:141] libmachine: (addons-214441) DBG |   </os>
	I0929 11:30:27.826643  595895 main.go:141] libmachine: (addons-214441) DBG |   <features>
	I0929 11:30:27.826652  595895 main.go:141] libmachine: (addons-214441) DBG |     <acpi/>
	I0929 11:30:27.826658  595895 main.go:141] libmachine: (addons-214441) DBG |     <apic/>
	I0929 11:30:27.826666  595895 main.go:141] libmachine: (addons-214441) DBG |     <pae/>
	I0929 11:30:27.826670  595895 main.go:141] libmachine: (addons-214441) DBG |   </features>
	I0929 11:30:27.826676  595895 main.go:141] libmachine: (addons-214441) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0929 11:30:27.826680  595895 main.go:141] libmachine: (addons-214441) DBG |   <clock offset='utc'/>
	I0929 11:30:27.826712  595895 main.go:141] libmachine: (addons-214441) DBG |   <on_poweroff>destroy</on_poweroff>
	I0929 11:30:27.826730  595895 main.go:141] libmachine: (addons-214441) DBG |   <on_reboot>restart</on_reboot>
	I0929 11:30:27.826740  595895 main.go:141] libmachine: (addons-214441) DBG |   <on_crash>destroy</on_crash>
	I0929 11:30:27.826748  595895 main.go:141] libmachine: (addons-214441) DBG |   <devices>
	I0929 11:30:27.826760  595895 main.go:141] libmachine: (addons-214441) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0929 11:30:27.826771  595895 main.go:141] libmachine: (addons-214441) DBG |     <disk type='file' device='cdrom'>
	I0929 11:30:27.826782  595895 main.go:141] libmachine: (addons-214441) DBG |       <driver name='qemu' type='raw'/>
	I0929 11:30:27.826804  595895 main.go:141] libmachine: (addons-214441) DBG |       <source file='/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/boot2docker.iso'/>
	I0929 11:30:27.826817  595895 main.go:141] libmachine: (addons-214441) DBG |       <target dev='hdc' bus='scsi'/>
	I0929 11:30:27.826828  595895 main.go:141] libmachine: (addons-214441) DBG |       <readonly/>
	I0929 11:30:27.826842  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0929 11:30:27.826853  595895 main.go:141] libmachine: (addons-214441) DBG |     </disk>
	I0929 11:30:27.826863  595895 main.go:141] libmachine: (addons-214441) DBG |     <disk type='file' device='disk'>
	I0929 11:30:27.826884  595895 main.go:141] libmachine: (addons-214441) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0929 11:30:27.826906  595895 main.go:141] libmachine: (addons-214441) DBG |       <source file='/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/addons-214441.rawdisk'/>
	I0929 11:30:27.826922  595895 main.go:141] libmachine: (addons-214441) DBG |       <target dev='hda' bus='virtio'/>
	I0929 11:30:27.826937  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0929 11:30:27.826947  595895 main.go:141] libmachine: (addons-214441) DBG |     </disk>
	I0929 11:30:27.826959  595895 main.go:141] libmachine: (addons-214441) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0929 11:30:27.826972  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0929 11:30:27.826984  595895 main.go:141] libmachine: (addons-214441) DBG |     </controller>
	I0929 11:30:27.827000  595895 main.go:141] libmachine: (addons-214441) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0929 11:30:27.827014  595895 main.go:141] libmachine: (addons-214441) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0929 11:30:27.827028  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0929 11:30:27.827039  595895 main.go:141] libmachine: (addons-214441) DBG |     </controller>
	I0929 11:30:27.827046  595895 main.go:141] libmachine: (addons-214441) DBG |     <interface type='network'>
	I0929 11:30:27.827053  595895 main.go:141] libmachine: (addons-214441) DBG |       <mac address='52:54:00:98:9c:d8'/>
	I0929 11:30:27.827060  595895 main.go:141] libmachine: (addons-214441) DBG |       <source network='mk-addons-214441'/>
	I0929 11:30:27.827087  595895 main.go:141] libmachine: (addons-214441) DBG |       <model type='virtio'/>
	I0929 11:30:27.827120  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0929 11:30:27.827133  595895 main.go:141] libmachine: (addons-214441) DBG |     </interface>
	I0929 11:30:27.827141  595895 main.go:141] libmachine: (addons-214441) DBG |     <interface type='network'>
	I0929 11:30:27.827146  595895 main.go:141] libmachine: (addons-214441) DBG |       <mac address='52:54:00:b8:70:d1'/>
	I0929 11:30:27.827154  595895 main.go:141] libmachine: (addons-214441) DBG |       <source network='default'/>
	I0929 11:30:27.827172  595895 main.go:141] libmachine: (addons-214441) DBG |       <model type='virtio'/>
	I0929 11:30:27.827197  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0929 11:30:27.827208  595895 main.go:141] libmachine: (addons-214441) DBG |     </interface>
	I0929 11:30:27.827218  595895 main.go:141] libmachine: (addons-214441) DBG |     <serial type='pty'>
	I0929 11:30:27.827232  595895 main.go:141] libmachine: (addons-214441) DBG |       <target type='isa-serial' port='0'>
	I0929 11:30:27.827252  595895 main.go:141] libmachine: (addons-214441) DBG |         <model name='isa-serial'/>
	I0929 11:30:27.827267  595895 main.go:141] libmachine: (addons-214441) DBG |       </target>
	I0929 11:30:27.827295  595895 main.go:141] libmachine: (addons-214441) DBG |     </serial>
	I0929 11:30:27.827306  595895 main.go:141] libmachine: (addons-214441) DBG |     <console type='pty'>
	I0929 11:30:27.827316  595895 main.go:141] libmachine: (addons-214441) DBG |       <target type='serial' port='0'/>
	I0929 11:30:27.827327  595895 main.go:141] libmachine: (addons-214441) DBG |     </console>
	I0929 11:30:27.827337  595895 main.go:141] libmachine: (addons-214441) DBG |     <input type='mouse' bus='ps2'/>
	I0929 11:30:27.827353  595895 main.go:141] libmachine: (addons-214441) DBG |     <input type='keyboard' bus='ps2'/>
	I0929 11:30:27.827365  595895 main.go:141] libmachine: (addons-214441) DBG |     <audio id='1' type='none'/>
	I0929 11:30:27.827381  595895 main.go:141] libmachine: (addons-214441) DBG |     <memballoon model='virtio'>
	I0929 11:30:27.827396  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0929 11:30:27.827407  595895 main.go:141] libmachine: (addons-214441) DBG |     </memballoon>
	I0929 11:30:27.827416  595895 main.go:141] libmachine: (addons-214441) DBG |     <rng model='virtio'>
	I0929 11:30:27.827462  595895 main.go:141] libmachine: (addons-214441) DBG |       <backend model='random'>/dev/random</backend>
	I0929 11:30:27.827477  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0929 11:30:27.827484  595895 main.go:141] libmachine: (addons-214441) DBG |     </rng>
	I0929 11:30:27.827492  595895 main.go:141] libmachine: (addons-214441) DBG |   </devices>
	I0929 11:30:27.827507  595895 main.go:141] libmachine: (addons-214441) DBG | </domain>
	I0929 11:30:27.827523  595895 main.go:141] libmachine: (addons-214441) DBG | 
	I0929 11:30:29.153785  595895 main.go:141] libmachine: (addons-214441) waiting for domain to start...
	I0929 11:30:29.155338  595895 main.go:141] libmachine: (addons-214441) domain is now running
	I0929 11:30:29.155366  595895 main.go:141] libmachine: (addons-214441) waiting for IP...
	I0929 11:30:29.156233  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:29.156741  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:29.156768  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:29.157097  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:29.157173  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:29.157084  595923 retry.go:31] will retry after 193.130078ms: waiting for domain to come up
	I0929 11:30:29.351641  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:29.352088  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:29.352131  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:29.352401  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:29.352453  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:29.352389  595923 retry.go:31] will retry after 298.936458ms: waiting for domain to come up
	I0929 11:30:29.653209  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:29.653776  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:29.653812  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:29.654092  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:29.654145  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:29.654057  595923 retry.go:31] will retry after 319.170448ms: waiting for domain to come up
	I0929 11:30:29.974953  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:29.975656  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:29.975697  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:29.976026  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:29.976053  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:29.976008  595923 retry.go:31] will retry after 599.248845ms: waiting for domain to come up
	I0929 11:30:30.576933  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:30.577607  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:30.577638  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:30.577976  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:30.578001  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:30.577944  595923 retry.go:31] will retry after 506.439756ms: waiting for domain to come up
	I0929 11:30:31.085911  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:31.086486  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:31.086516  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:31.086838  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:31.086901  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:31.086827  595923 retry.go:31] will retry after 714.950089ms: waiting for domain to come up
	I0929 11:30:31.803913  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:31.804432  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:31.804465  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:31.804799  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:31.804835  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:31.804762  595923 retry.go:31] will retry after 948.596157ms: waiting for domain to come up
	I0929 11:30:32.755226  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:32.755814  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:32.755837  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:32.756159  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:32.756191  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:32.756135  595923 retry.go:31] will retry after 1.377051804s: waiting for domain to come up
	I0929 11:30:34.136012  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:34.136582  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:34.136605  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:34.136880  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:34.136912  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:34.136849  595923 retry.go:31] will retry after 1.34696154s: waiting for domain to come up
	I0929 11:30:35.485739  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:35.486269  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:35.486292  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:35.486548  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:35.486587  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:35.486521  595923 retry.go:31] will retry after 1.574508192s: waiting for domain to come up
	I0929 11:30:37.063528  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:37.064142  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:37.064170  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:37.064559  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:37.064594  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:37.064489  595923 retry.go:31] will retry after 2.067291223s: waiting for domain to come up
	I0929 11:30:39.135405  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:39.135998  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:39.136030  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:39.136354  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:39.136412  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:39.136338  595923 retry.go:31] will retry after 3.104602856s: waiting for domain to come up
	I0929 11:30:42.242410  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:42.242939  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:42.242965  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:42.243288  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:42.243344  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:42.243280  595923 retry.go:31] will retry after 4.150705767s: waiting for domain to come up
	I0929 11:30:46.398779  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.399347  595895 main.go:141] libmachine: (addons-214441) found domain IP: 192.168.39.76
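
The backoff sequence above (193ms, 299ms, 319ms, ..., 4.15s) is a poll-with-jittered-backoff loop: each attempt asks libvirt for the domain's DHCP lease (then ARP) by MAC address and, on failure, sleeps a growing, randomized interval before retrying. A minimal, self-contained sketch of that pattern, assuming a hypothetical lookupDomainIP helper in place of the real libvirt queries:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupDomainIP is a hypothetical stand-in for querying libvirt DHCP
    // leases (and then ARP) for the domain's MAC address.
    func lookupDomainIP(mac string) (string, error) {
        return "", errors.New("no network interface addresses found")
    }

    // waitForIP polls lookupDomainIP with a jittered, growing delay until it
    // succeeds or the deadline passes, mirroring the retry cadence in the log.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupDomainIP(mac); err == nil {
                return ip, nil
            }
            // Add up to 50% jitter and grow the delay, capped at ~5s.
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
            fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
            time.Sleep(sleep)
            if delay *= 2; delay > 5*time.Second {
                delay = 5 * time.Second
            }
        }
        return "", fmt.Errorf("timed out waiting for IP of MAC %s", mac)
    }

    func main() {
        if ip, err := waitForIP("52:54:00:98:9c:d8", 3*time.Second); err != nil {
            fmt.Println("error:", err)
        } else {
            fmt.Println("found domain IP:", ip)
        }
    }
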
	I0929 11:30:46.399374  595895 main.go:141] libmachine: (addons-214441) reserving static IP address...
	I0929 11:30:46.399388  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has current primary IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.399901  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find host DHCP lease matching {name: "addons-214441", mac: "52:54:00:98:9c:d8", ip: "192.168.39.76"} in network mk-addons-214441
	I0929 11:30:46.587177  595895 main.go:141] libmachine: (addons-214441) DBG | Getting to WaitForSSH function...
	I0929 11:30:46.587215  595895 main.go:141] libmachine: (addons-214441) reserved static IP address 192.168.39.76 for domain addons-214441
	I0929 11:30:46.587228  595895 main.go:141] libmachine: (addons-214441) waiting for SSH...
	I0929 11:30:46.590179  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.590588  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:minikube Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:46.590626  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.590750  595895 main.go:141] libmachine: (addons-214441) DBG | Using SSH client type: external
	I0929 11:30:46.590791  595895 main.go:141] libmachine: (addons-214441) DBG | Using SSH private key: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa (-rw-------)
	I0929 11:30:46.590840  595895 main.go:141] libmachine: (addons-214441) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0929 11:30:46.590868  595895 main.go:141] libmachine: (addons-214441) DBG | About to run SSH command:
	I0929 11:30:46.590883  595895 main.go:141] libmachine: (addons-214441) DBG | exit 0
	I0929 11:30:46.729877  595895 main.go:141] libmachine: (addons-214441) DBG | SSH cmd err, output: <nil>: 
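
SSH readiness is confirmed by running a no-op command (exit 0) with the external ssh client and the hardened options listed above; an empty error and output means the guest is reachable. A small sketch of driving the system ssh binary the same way, where the address is taken from this run and the key path is a placeholder:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshExitZero runs `exit 0` on the guest using the system ssh client with
    // the same hardened options seen in the log above.
    func sshExitZero(addr, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + addr,
            "exit 0",
        }
        out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("ssh probe failed: %v (output: %s)", err, out)
        }
        return nil
    }

    func main() {
        for i := 0; i < 3; i++ {
            if err := sshExitZero("192.168.39.76", "/path/to/id_rsa"); err == nil {
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("gave up waiting for SSH")
    }
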
	I0929 11:30:46.730171  595895 main.go:141] libmachine: (addons-214441) domain creation complete
	I0929 11:30:46.730534  595895 main.go:141] libmachine: (addons-214441) Calling .GetConfigRaw
	I0929 11:30:46.731196  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:46.731410  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:46.731600  595895 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0929 11:30:46.731623  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:30:46.732882  595895 main.go:141] libmachine: Detecting operating system of created instance...
	I0929 11:30:46.732897  595895 main.go:141] libmachine: Waiting for SSH to be available...
	I0929 11:30:46.732902  595895 main.go:141] libmachine: Getting to WaitForSSH function...
	I0929 11:30:46.732908  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:46.735685  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.736210  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:46.736238  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.736397  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:46.736652  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.736854  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.736998  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:46.737156  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:46.737392  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:46.737403  595895 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0929 11:30:46.844278  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 11:30:46.844312  595895 main.go:141] libmachine: Detecting the provisioner...
	I0929 11:30:46.844324  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:46.848224  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.849228  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:46.849264  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.849457  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:46.849706  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.849884  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.850038  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:46.850227  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:46.850481  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:46.850494  595895 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0929 11:30:46.959386  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0929 11:30:46.959537  595895 main.go:141] libmachine: found compatible host: buildroot
	I0929 11:30:46.959560  595895 main.go:141] libmachine: Provisioning with buildroot...
	I0929 11:30:46.959572  595895 main.go:141] libmachine: (addons-214441) Calling .GetMachineName
	I0929 11:30:46.959897  595895 buildroot.go:166] provisioning hostname "addons-214441"
	I0929 11:30:46.959920  595895 main.go:141] libmachine: (addons-214441) Calling .GetMachineName
	I0929 11:30:46.960158  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:46.963429  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.963851  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:46.963892  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.964187  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:46.964389  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.964590  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.964750  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:46.964942  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:46.965188  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:46.965202  595895 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-214441 && echo "addons-214441" | sudo tee /etc/hostname
	I0929 11:30:47.092132  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-214441
	
	I0929 11:30:47.092159  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.095605  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.096136  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.096169  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.096340  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.096555  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.096747  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.096902  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.097123  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:47.097351  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:47.097369  595895 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-214441' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-214441/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-214441' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 11:30:47.216048  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: 
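
The shell snippet above makes the hostname mapping idempotent: if no /etc/hosts line already ends in the hostname, it either rewrites an existing 127.0.1.1 entry or appends a new one. A rough equivalent of that logic, assuming read access to /etc/hosts (it only prints the result rather than writing it back):

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    // ensureHostsEntry mirrors the shell snippet above: if no line already maps
    // the hostname, rewrite an existing 127.0.1.1 line or append a new one.
    func ensureHostsEntry(contents, hostname string) string {
        hasName := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
        if hasName.MatchString(contents) {
            return contents
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(contents) {
            return loopback.ReplaceAllString(contents, "127.0.1.1 "+hostname)
        }
        if contents != "" && !strings.HasSuffix(contents, "\n") {
            contents += "\n"
        }
        return contents + "127.0.1.1 " + hostname + "\n"
    }

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Println("read /etc/hosts:", err)
            return
        }
        fmt.Print(ensureHostsEntry(string(data), "addons-214441"))
    }
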
	I0929 11:30:47.216081  595895 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21654-591397/.minikube CaCertPath:/home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21654-591397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21654-591397/.minikube}
	I0929 11:30:47.216160  595895 buildroot.go:174] setting up certificates
	I0929 11:30:47.216176  595895 provision.go:84] configureAuth start
	I0929 11:30:47.216187  595895 main.go:141] libmachine: (addons-214441) Calling .GetMachineName
	I0929 11:30:47.216551  595895 main.go:141] libmachine: (addons-214441) Calling .GetIP
	I0929 11:30:47.219822  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.220206  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.220241  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.220424  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.222925  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.223320  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.223351  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.223603  595895 provision.go:143] copyHostCerts
	I0929 11:30:47.223674  595895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21654-591397/.minikube/cert.pem (1123 bytes)
	I0929 11:30:47.223815  595895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21654-591397/.minikube/key.pem (1675 bytes)
	I0929 11:30:47.223908  595895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21654-591397/.minikube/ca.pem (1082 bytes)
	I0929 11:30:47.223987  595895 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca-key.pem org=jenkins.addons-214441 san=[127.0.0.1 192.168.39.76 addons-214441 localhost minikube]
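
The server certificate is generated with Subject Alternative Names covering the loopback address, the VM IP, the machine name, localhost and minikube, and is signed by the profile's CA. A minimal sketch of building such a SAN certificate with the Go standard library; it is self-signed here for brevity, whereas the real flow signs with ca.pem/ca-key.pem:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-214441"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs copied from the provision line above.
            DNSNames:    []string{"addons-214441", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.76")},
        }
        // Self-signed for brevity; the real flow signs with the minikube CA key.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
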
	I0929 11:30:47.541100  595895 provision.go:177] copyRemoteCerts
	I0929 11:30:47.541199  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 11:30:47.541238  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.544486  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.544940  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.545024  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.545286  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.545574  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.545766  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.545940  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:30:47.632441  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 11:30:47.665928  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 11:30:47.699464  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 11:30:47.731874  595895 provision.go:87] duration metric: took 515.680125ms to configureAuth
	I0929 11:30:47.731904  595895 buildroot.go:189] setting minikube options for container-runtime
	I0929 11:30:47.732120  595895 config.go:182] Loaded profile config "addons-214441": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:30:47.732187  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:47.732484  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.735606  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.736098  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.736147  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.736408  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.736676  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.736876  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.737026  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.737286  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:47.737503  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:47.737522  595895 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 11:30:47.845243  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0929 11:30:47.845278  595895 buildroot.go:70] root file system type: tmpfs
	I0929 11:30:47.845464  595895 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 11:30:47.845493  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.848685  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.849080  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.849125  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.849333  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.849561  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.849749  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.849921  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.850156  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:47.850438  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:47.850513  595895 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 11:30:47.980841  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 11:30:47.980885  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.984021  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.984467  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.984505  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.984746  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.984964  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.985145  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.985345  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.985533  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:47.985753  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:47.985769  595895 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 11:30:48.944806  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I0929 11:30:48.944837  595895 main.go:141] libmachine: Checking connection to Docker...
	I0929 11:30:48.944847  595895 main.go:141] libmachine: (addons-214441) Calling .GetURL
	I0929 11:30:48.946423  595895 main.go:141] libmachine: (addons-214441) DBG | using libvirt version 8000000
	I0929 11:30:48.949334  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:48.949705  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:48.949727  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:48.949905  595895 main.go:141] libmachine: Docker is up and running!
	I0929 11:30:48.949918  595895 main.go:141] libmachine: Reticulating splines...
	I0929 11:30:48.949926  595895 client.go:171] duration metric: took 22.382562322s to LocalClient.Create
	I0929 11:30:48.949961  595895 start.go:167] duration metric: took 22.382646372s to libmachine.API.Create "addons-214441"
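
The docker.service install a few lines up uses a compare-then-replace idiom: the unit is written to docker.service.new and, only if it differs from the installed file (here it does not exist yet), moved into place followed by daemon-reload, enable and restart. A compact sketch of that idiom, with a placeholder unit body and root privileges assumed:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installUnit writes the unit only when its content changed, then reloads
    // and restarts the service, mirroring the diff-or-replace command above.
    func installUnit(path string, content []byte) error {
        current, err := os.ReadFile(path)
        if err == nil && bytes.Equal(current, content) {
            return nil // nothing to do, keep the running service untouched
        }
        if err := os.WriteFile(path, content, 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"},
            {"enable", "docker"},
            {"restart", "docker"},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v (%s)", args, err, out)
            }
        }
        return nil
    }

    func main() {
        unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // placeholder content
        if err := installUnit("/lib/systemd/system/docker.service", unit); err != nil {
            fmt.Println("install failed:", err)
        }
    }
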
	I0929 11:30:48.949977  595895 start.go:293] postStartSetup for "addons-214441" (driver="kvm2")
	I0929 11:30:48.949995  595895 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 11:30:48.950016  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:48.950285  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 11:30:48.950309  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:48.952588  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:48.952941  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:48.952973  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:48.953140  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:48.953358  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:48.953522  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:48.953678  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:30:49.038834  595895 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 11:30:49.044530  595895 info.go:137] Remote host: Buildroot 2025.02
	I0929 11:30:49.044562  595895 filesync.go:126] Scanning /home/jenkins/minikube-integration/21654-591397/.minikube/addons for local assets ...
	I0929 11:30:49.044653  595895 filesync.go:126] Scanning /home/jenkins/minikube-integration/21654-591397/.minikube/files for local assets ...
	I0929 11:30:49.044700  595895 start.go:296] duration metric: took 94.715435ms for postStartSetup
	I0929 11:30:49.044748  595895 main.go:141] libmachine: (addons-214441) Calling .GetConfigRaw
	I0929 11:30:49.045427  595895 main.go:141] libmachine: (addons-214441) Calling .GetIP
	I0929 11:30:49.048440  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.048801  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.048825  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.049194  595895 profile.go:143] Saving config to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/config.json ...
	I0929 11:30:49.049405  595895 start.go:128] duration metric: took 22.499712752s to createHost
	I0929 11:30:49.049432  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:49.052122  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.052625  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.052654  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.052915  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:49.053180  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:49.053373  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:49.053538  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:49.053724  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:49.053929  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:49.053940  595895 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0929 11:30:49.163416  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759145449.126116077
	
	I0929 11:30:49.163441  595895 fix.go:216] guest clock: 1759145449.126116077
	I0929 11:30:49.163449  595895 fix.go:229] Guest: 2025-09-29 11:30:49.126116077 +0000 UTC Remote: 2025-09-29 11:30:49.049418276 +0000 UTC m=+22.624163516 (delta=76.697801ms)
	I0929 11:30:49.163493  595895 fix.go:200] guest clock delta is within tolerance: 76.697801ms
	I0929 11:30:49.163499  595895 start.go:83] releasing machines lock for "addons-214441", held for 22.613874794s
	I0929 11:30:49.163528  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:49.163838  595895 main.go:141] libmachine: (addons-214441) Calling .GetIP
	I0929 11:30:49.166822  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.167209  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.167249  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.167420  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:49.168022  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:49.168252  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:49.168368  595895 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 11:30:49.168430  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:49.168489  595895 ssh_runner.go:195] Run: cat /version.json
	I0929 11:30:49.168513  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:49.172018  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.172253  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.172513  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.172540  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.172628  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.172666  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.172701  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:49.172958  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:49.173000  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:49.173136  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:49.173213  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:49.173301  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:49.173395  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:30:49.173457  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:30:49.251709  595895 ssh_runner.go:195] Run: systemctl --version
	I0929 11:30:49.275600  595895 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0929 11:30:49.282636  595895 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0929 11:30:49.282710  595895 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 11:30:49.304880  595895 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0929 11:30:49.304913  595895 start.go:495] detecting cgroup driver to use...
	I0929 11:30:49.305043  595895 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 11:30:49.330757  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 11:30:49.345061  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 11:30:49.359226  595895 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0929 11:30:49.359329  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0929 11:30:49.373874  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 11:30:49.388075  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 11:30:49.401811  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 11:30:49.415626  595895 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 11:30:49.431189  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 11:30:49.445445  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 11:30:49.459477  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 11:30:49.473176  595895 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 11:30:49.485689  595895 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0929 11:30:49.485783  595895 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0929 11:30:49.499975  595895 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
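
The failed sysctl probe simply means br_netfilter is not loaded yet, so the driver loads the module and then enables IPv4 forwarding directly through /proc. A short sketch of the same check-load-enable sequence (requires root; paths are the standard /proc/sys locations):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const bridgeNF = "/proc/sys/net/bridge/bridge-nf-call-iptables"

        // If the bridge netfilter sysctl is missing, the br_netfilter module
        // has not been loaded yet, so load it first.
        if _, err := os.Stat(bridgeNF); os.IsNotExist(err) {
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                fmt.Printf("modprobe br_netfilter failed: %v (%s)\n", err, out)
                return
            }
        }

        // Enable IPv4 forwarding, equivalent to `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
            fmt.Println("enabling ip_forward failed:", err)
            return
        }
        fmt.Println("bridge netfilter and ip_forward configured")
    }
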
	I0929 11:30:49.513013  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:49.660311  595895 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 11:30:49.703655  595895 start.go:495] detecting cgroup driver to use...
	I0929 11:30:49.703755  595895 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 11:30:49.722813  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 11:30:49.750032  595895 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 11:30:49.777529  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 11:30:49.795732  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 11:30:49.813375  595895 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0929 11:30:49.851205  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 11:30:49.869489  595895 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 11:30:49.896122  595895 ssh_runner.go:195] Run: which cri-dockerd
	I0929 11:30:49.900877  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 11:30:49.914013  595895 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 11:30:49.937663  595895 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 11:30:50.087078  595895 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 11:30:50.258242  595895 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0929 11:30:50.258407  595895 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0929 11:30:50.281600  595895 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 11:30:50.297843  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:50.442188  595895 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 11:30:51.468324  595895 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.026092315s)
	I0929 11:30:51.468405  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 11:30:51.485284  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 11:30:51.502338  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 11:30:51.520247  595895 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 11:30:51.674618  595895 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 11:30:51.823542  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:51.969743  595895 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 11:30:52.010885  595895 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 11:30:52.027992  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:52.187556  595895 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 11:30:52.300820  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 11:30:52.324658  595895 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 11:30:52.324786  595895 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 11:30:52.331994  595895 start.go:563] Will wait 60s for crictl version
	I0929 11:30:52.332070  595895 ssh_runner.go:195] Run: which crictl
	I0929 11:30:52.336923  595895 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 11:30:52.378177  595895 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 11:30:52.378280  595895 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 11:30:52.410851  595895 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 11:30:52.543475  595895 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 11:30:52.543553  595895 main.go:141] libmachine: (addons-214441) Calling .GetIP
	I0929 11:30:52.546859  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:52.547288  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:52.547313  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:52.547612  595895 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0929 11:30:52.553031  595895 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 11:30:52.570843  595895 kubeadm.go:875] updating cluster {Name:addons-214441 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 11:30:52.570982  595895 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 11:30:52.571045  595895 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 11:30:52.589813  595895 docker.go:691] Got preloaded images: 
	I0929 11:30:52.589850  595895 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.0 wasn't preloaded
	I0929 11:30:52.589920  595895 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0929 11:30:52.603859  595895 ssh_runner.go:195] Run: which lz4
	I0929 11:30:52.608929  595895 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0929 11:30:52.614449  595895 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0929 11:30:52.614480  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (353447550 bytes)
	I0929 11:30:54.030641  595895 docker.go:655] duration metric: took 1.421784291s to copy over tarball
	I0929 11:30:54.030729  595895 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0929 11:30:55.448691  595895 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.417923545s)
	I0929 11:30:55.448737  595895 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0929 11:30:55.496341  595895 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0929 11:30:55.514175  595895 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2632 bytes)
	I0929 11:30:55.539628  595895 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 11:30:55.556201  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:55.705196  595895 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 11:30:57.773379  595895 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.068131004s)
	I0929 11:30:57.773509  595895 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 11:30:57.795878  595895 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 11:30:57.795910  595895 cache_images.go:85] Images are preloaded, skipping loading
	I0929 11:30:57.795931  595895 kubeadm.go:926] updating node { 192.168.39.76 8443 v1.34.0 docker true true} ...
	I0929 11:30:57.796049  595895 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-214441 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 11:30:57.796127  595895 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 11:30:57.852690  595895 cni.go:84] Creating CNI manager for ""
	I0929 11:30:57.852756  595895 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 11:30:57.852774  595895 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 11:30:57.852803  595895 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.76 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-214441 NodeName:addons-214441 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.76"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.76 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 11:30:57.852981  595895 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.76
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-214441"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.76"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.76"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
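
	The generated kubeadm.yaml shown above is a single multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by ---. A small sketch of walking such a stream document by document, assuming the third-party gopkg.in/yaml.v3 package is available (the embedded YAML below is trimmed to just the apiVersion/kind pairs from the log):

	package main

	import (
		"fmt"
		"io"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		data := `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	`
		dec := yaml.NewDecoder(strings.NewReader(data))
		for {
			// Decode one YAML document at a time; io.EOF marks the end of the stream.
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				fmt.Println("decode error:", err)
				return
			}
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}

	Note: the raw string above is indented here only to match the surrounding log; in a real file the YAML would start at column zero.
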
	
	I0929 11:30:57.853053  595895 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 11:30:57.866164  595895 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 11:30:57.866236  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 11:30:57.879054  595895 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0929 11:30:57.901136  595895 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 11:30:57.922808  595895 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I0929 11:30:57.944391  595895 ssh_runner.go:195] Run: grep 192.168.39.76	control-plane.minikube.internal$ /etc/hosts
	I0929 11:30:57.949077  595895 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.76	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
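
	The bash one-liner above makes the control-plane /etc/hosts entry idempotent: strip any existing line for control-plane.minikube.internal, then append the fresh IP mapping. A rough Go equivalent, operating on an in-memory copy rather than the real /etc/hosts:

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostsEntry sketches what the bash one-liner does: drop any existing
	// line for the name, then append "<ip>\t<name>". Purely illustrative.
	func ensureHostsEntry(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // stale entry for this name; replaced below
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		current := "127.0.0.1\tlocalhost\n"
		fmt.Print(ensureHostsEntry(current, "192.168.39.76", "control-plane.minikube.internal"))
	}
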
	I0929 11:30:57.965713  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:58.115608  595895 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:30:58.151915  595895 certs.go:68] Setting up /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441 for IP: 192.168.39.76
	I0929 11:30:58.151940  595895 certs.go:194] generating shared ca certs ...
	I0929 11:30:58.151960  595895 certs.go:226] acquiring lock for ca certs: {Name:mk707c73ecd79d5343eca8617a792346e0c7ccb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.152119  595895 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21654-591397/.minikube/ca.key
	I0929 11:30:58.470474  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/ca.crt ...
	I0929 11:30:58.470507  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/ca.crt: {Name:mk182656d7edea57f023d2e0db199cb4225a8b4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.470704  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/ca.key ...
	I0929 11:30:58.470715  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/ca.key: {Name:mkd9949b3876b9f68542fba6d581787f4502134f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.470791  595895 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.key
	I0929 11:30:58.721631  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.crt ...
	I0929 11:30:58.721664  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.crt: {Name:mk28d9b982dd4335b19ce60c764e1cd1a4d53764 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.721838  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.key ...
	I0929 11:30:58.721850  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.key: {Name:mk92f9d60795b7f581dcb4003e857f2fb68fb997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.721920  595895 certs.go:256] generating profile certs ...
	I0929 11:30:58.721989  595895 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.key
	I0929 11:30:58.722004  595895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt with IP's: []
	I0929 11:30:59.043304  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt ...
	I0929 11:30:59.043336  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: {Name:mkd724da95490eed1b0581ef6c65a2b1785468b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.043499  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.key ...
	I0929 11:30:59.043510  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.key: {Name:mkba543125a928af6b44a2eb304c49514c816581 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.043578  595895 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key.adbef5ab
	I0929 11:30:59.043598  595895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt.adbef5ab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.76]
	I0929 11:30:59.456164  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt.adbef5ab ...
	I0929 11:30:59.456200  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt.adbef5ab: {Name:mk5a23687be38fbd7ef5257880d1d7f5b199f933 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.456424  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key.adbef5ab ...
	I0929 11:30:59.456443  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key.adbef5ab: {Name:mke7b9b847497d2728644e9b30a8393a50e57e5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.456526  595895 certs.go:381] copying /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt.adbef5ab -> /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt
	I0929 11:30:59.456638  595895 certs.go:385] copying /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key.adbef5ab -> /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key
	I0929 11:30:59.456705  595895 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.key
	I0929 11:30:59.456726  595895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.crt with IP's: []
	I0929 11:30:59.785388  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.crt ...
	I0929 11:30:59.785424  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.crt: {Name:mkb2afc6ab3119c9842fe1ce2f48d7c6196dbfb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.785611  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.key ...
	I0929 11:30:59.785642  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.key: {Name:mk6b37b3ae22881d553c47031d96c6f22bdfded2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.785833  595895 certs.go:484] found cert: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 11:30:59.785879  595895 certs.go:484] found cert: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem (1082 bytes)
	I0929 11:30:59.785905  595895 certs.go:484] found cert: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/cert.pem (1123 bytes)
	I0929 11:30:59.785932  595895 certs.go:484] found cert: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/key.pem (1675 bytes)
	I0929 11:30:59.786662  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 11:30:59.821270  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 11:30:59.853588  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 11:30:59.885559  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 11:30:59.916538  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 11:30:59.948991  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 11:30:59.981478  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 11:31:00.014753  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0929 11:31:00.046891  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 11:31:00.079370  595895 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 11:31:00.101600  595895 ssh_runner.go:195] Run: openssl version
	I0929 11:31:00.108829  595895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 11:31:00.123448  595895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:31:00.129416  595895 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 11:30 /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:31:00.129502  595895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:31:00.137583  595895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
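
	The two commands above wire the minikube CA into the system trust store: "openssl x509 -hash -noout" prints the certificate's subject-name hash (b5213941 here), and /etc/ssl/certs gets a <hash>.0 symlink pointing at the PEM. A hedged sketch of deriving that symlink name by shelling out to the same openssl invocation (the certificate path is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// subjectHashLink asks openssl for the subject-name hash of a CA certificate
	// (the same "openssl x509 -hash -noout -in <pem>" call seen in the log) and
	// returns the "<hash>.0" symlink name used under /etc/ssl/certs.
	func subjectHashLink(pemPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)) + ".0", nil
	}

	func main() {
		link, err := subjectHashLink("minikubeCA.pem")
		if err != nil {
			fmt.Println("openssl failed:", err)
			return
		}
		fmt.Println("symlink name:", link) // e.g. b5213941.0
	}
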
	I0929 11:31:00.152396  595895 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 11:31:00.157895  595895 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 11:31:00.157960  595895 kubeadm.go:392] StartCluster: {Name:addons-214441 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:31:00.158083  595895 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 11:31:00.176917  595895 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 11:31:00.190119  595895 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 11:31:00.203558  595895 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 11:31:00.216736  595895 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 11:31:00.216758  595895 kubeadm.go:157] found existing configuration files:
	
	I0929 11:31:00.216805  595895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 11:31:00.229008  595895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 11:31:00.229138  595895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 11:31:00.242441  595895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 11:31:00.254460  595895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 11:31:00.254523  595895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 11:31:00.268124  595895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 11:31:00.284523  595895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 11:31:00.284596  595895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 11:31:00.297510  595895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 11:31:00.311858  595895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 11:31:00.311927  595895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 11:31:00.329319  595895 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0929 11:31:00.392668  595895 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 11:31:00.392776  595895 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 11:31:00.500945  595895 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 11:31:00.501073  595895 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 11:31:00.501248  595895 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 11:31:00.518470  595895 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 11:31:00.521672  595895 out.go:252]   - Generating certificates and keys ...
	I0929 11:31:00.521778  595895 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 11:31:00.521835  595895 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 11:31:00.844406  595895 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 11:31:01.356940  595895 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 11:31:01.469316  595895 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 11:31:01.609628  595895 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 11:31:01.854048  595895 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 11:31:01.854239  595895 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-214441 localhost] and IPs [192.168.39.76 127.0.0.1 ::1]
	I0929 11:31:02.222219  595895 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 11:31:02.222361  595895 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-214441 localhost] and IPs [192.168.39.76 127.0.0.1 ::1]
	I0929 11:31:02.331774  595895 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 11:31:02.452417  595895 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 11:31:03.277600  595895 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 11:31:03.277709  595895 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 11:31:03.337296  595895 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 11:31:03.576740  595895 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 11:31:03.754957  595895 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 11:31:04.028596  595895 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 11:31:04.458901  595895 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 11:31:04.459731  595895 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 11:31:04.461956  595895 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 11:31:04.463895  595895 out.go:252]   - Booting up control plane ...
	I0929 11:31:04.464031  595895 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 11:31:04.464116  595895 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 11:31:04.464220  595895 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 11:31:04.482430  595895 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 11:31:04.482595  595895 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 11:31:04.490659  595895 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 11:31:04.490827  595895 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 11:31:04.490920  595895 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 11:31:04.666361  595895 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 11:31:04.666495  595895 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 11:31:05.175870  595895 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 510.006022ms
	I0929 11:31:05.187944  595895 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 11:31:05.188057  595895 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.76:8443/livez
	I0929 11:31:05.188256  595895 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 11:31:05.188362  595895 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 11:31:07.767053  595895 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.579446651s
	I0929 11:31:09.215755  595895 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.029766048s
	I0929 11:31:11.189186  595895 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.002998119s
	I0929 11:31:11.214239  595895 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 11:31:11.232892  595895 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 11:31:11.255389  595895 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 11:31:11.255580  595895 kubeadm.go:310] [mark-control-plane] Marking the node addons-214441 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 11:31:11.270844  595895 kubeadm.go:310] [bootstrap-token] Using token: 7wgemt.sdnt4jx2dgy9ll51
	I0929 11:31:11.272442  595895 out.go:252]   - Configuring RBAC rules ...
	I0929 11:31:11.272557  595895 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 11:31:11.279364  595895 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 11:31:11.294463  595895 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 11:31:11.298793  595895 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 11:31:11.306582  595895 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 11:31:11.323727  595895 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 11:31:11.601710  595895 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 11:31:12.069553  595895 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 11:31:12.597044  595895 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 11:31:12.597931  595895 kubeadm.go:310] 
	I0929 11:31:12.598017  595895 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 11:31:12.598026  595895 kubeadm.go:310] 
	I0929 11:31:12.598142  595895 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 11:31:12.598153  595895 kubeadm.go:310] 
	I0929 11:31:12.598181  595895 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 11:31:12.598281  595895 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 11:31:12.598374  595895 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 11:31:12.598390  595895 kubeadm.go:310] 
	I0929 11:31:12.598436  595895 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 11:31:12.598442  595895 kubeadm.go:310] 
	I0929 11:31:12.598481  595895 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 11:31:12.598497  595895 kubeadm.go:310] 
	I0929 11:31:12.598577  595895 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 11:31:12.598692  595895 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 11:31:12.598809  595895 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 11:31:12.598828  595895 kubeadm.go:310] 
	I0929 11:31:12.598937  595895 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 11:31:12.599041  595895 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 11:31:12.599055  595895 kubeadm.go:310] 
	I0929 11:31:12.599196  595895 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7wgemt.sdnt4jx2dgy9ll51 \
	I0929 11:31:12.599332  595895 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f2c60916dfe453d33b3c30594ac6ac02ea520a59a4bdac7119cd9e587175e1eb \
	I0929 11:31:12.599365  595895 kubeadm.go:310] 	--control-plane 
	I0929 11:31:12.599397  595895 kubeadm.go:310] 
	I0929 11:31:12.599486  595895 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 11:31:12.599496  595895 kubeadm.go:310] 
	I0929 11:31:12.599568  595895 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7wgemt.sdnt4jx2dgy9ll51 \
	I0929 11:31:12.599705  595895 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f2c60916dfe453d33b3c30594ac6ac02ea520a59a4bdac7119cd9e587175e1eb 
	I0929 11:31:12.601217  595895 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
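
	The --discovery-token-ca-cert-hash value printed in the join commands above follows kubeadm's convention: "sha256:" plus the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A sketch of recomputing it from a PEM-encoded CA certificate (the ca.crt path is illustrative):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	// caCertHash computes the value kubeadm prints as --discovery-token-ca-cert-hash:
	// "sha256:" + SHA-256 of the CA certificate's DER-encoded Subject Public Key Info.
	func caCertHash(pemPath string) (string, error) {
		raw, err := os.ReadFile(pemPath)
		if err != nil {
			return "", err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return "", fmt.Errorf("no PEM data in %s", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return "", err
		}
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			return "", err
		}
		sum := sha256.Sum256(spki)
		return fmt.Sprintf("sha256:%x", sum), nil
	}

	func main() {
		hash, err := caCertHash("ca.crt")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(hash)
	}
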
	I0929 11:31:12.601272  595895 cni.go:84] Creating CNI manager for ""
	I0929 11:31:12.601305  595895 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 11:31:12.603223  595895 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0929 11:31:12.604766  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0929 11:31:12.618554  595895 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0929 11:31:12.641768  595895 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 11:31:12.641942  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:12.641954  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-214441 minikube.k8s.io/updated_at=2025_09_29T11_31_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b8c533eb3dce8338d3d4e7231ea97d1c44ed6f81 minikube.k8s.io/name=addons-214441 minikube.k8s.io/primary=true
	I0929 11:31:12.682767  595895 ops.go:34] apiserver oom_adj: -16
	I0929 11:31:12.800130  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:13.300439  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:13.800339  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:14.300644  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:14.800381  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:15.301049  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:15.801207  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:16.301226  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:16.801024  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:17.300849  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:17.440215  595895 kubeadm.go:1105] duration metric: took 4.798376612s to wait for elevateKubeSystemPrivileges
	I0929 11:31:17.440271  595895 kubeadm.go:394] duration metric: took 17.282308974s to StartCluster
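
	The oom_adj probe a few lines up (ops.go:34, reporting -16 for kube-apiserver) boils down to resolving the apiserver PID and reading /proc/<pid>/oom_adj. A small sketch of the same check, assuming pgrep is available on the node:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// apiserverOOMAdj mirrors the logged check: find the kube-apiserver PID with
	// pgrep and read its /proc/<pid>/oom_adj value.
	func apiserverOOMAdj() (string, error) {
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			return "", fmt.Errorf("pgrep: %w", err)
		}
		pid := strings.Fields(string(out))[0] // first match
		val, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(val)), nil
	}

	func main() {
		adj, err := apiserverOOMAdj()
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kube-apiserver oom_adj:", adj)
	}
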
	I0929 11:31:17.440297  595895 settings.go:142] acquiring lock: {Name:mk832bb073af4ae47756dd4494ea087d7aa99c2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:31:17.440448  595895 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21654-591397/kubeconfig
	I0929 11:31:17.441186  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/kubeconfig: {Name:mk64b4db01785e3abeedb000f7d1263b1f56db2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:31:17.441409  595895 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 11:31:17.441416  595895 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 11:31:17.441496  595895 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0929 11:31:17.441684  595895 config.go:182] Loaded profile config "addons-214441": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:31:17.441696  595895 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-214441"
	I0929 11:31:17.441708  595895 addons.go:69] Setting yakd=true in profile "addons-214441"
	I0929 11:31:17.441736  595895 addons.go:238] Setting addon yakd=true in "addons-214441"
	I0929 11:31:17.441757  595895 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-214441"
	I0929 11:31:17.441709  595895 addons.go:69] Setting ingress=true in profile "addons-214441"
	I0929 11:31:17.441784  595895 addons.go:238] Setting addon ingress=true in "addons-214441"
	I0929 11:31:17.441793  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.441803  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.441799  595895 addons.go:69] Setting default-storageclass=true in profile "addons-214441"
	I0929 11:31:17.441840  595895 addons.go:69] Setting gcp-auth=true in profile "addons-214441"
	I0929 11:31:17.441876  595895 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-214441"
	I0929 11:31:17.441886  595895 mustload.go:65] Loading cluster: addons-214441
	I0929 11:31:17.441893  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442145  595895 addons.go:69] Setting registry=true in profile "addons-214441"
	I0929 11:31:17.442160  595895 addons.go:238] Setting addon registry=true in "addons-214441"
	I0929 11:31:17.442191  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442280  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.442300  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.442353  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.442366  595895 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-214441"
	I0929 11:31:17.442371  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.442380  595895 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-214441"
	I0929 11:31:17.442381  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.442385  595895 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-214441"
	I0929 11:31:17.442396  595895 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-214441"
	I0929 11:31:17.442399  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.442425  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.442400  595895 addons.go:69] Setting cloud-spanner=true in profile "addons-214441"
	I0929 11:31:17.442448  595895 addons.go:69] Setting registry-creds=true in profile "addons-214441"
	I0929 11:31:17.442456  595895 addons.go:238] Setting addon cloud-spanner=true in "addons-214441"
	I0929 11:31:17.442469  595895 addons.go:238] Setting addon registry-creds=true in "addons-214441"
	I0929 11:31:17.442478  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.442491  595895 addons.go:69] Setting storage-provisioner=true in profile "addons-214441"
	I0929 11:31:17.442514  595895 addons.go:238] Setting addon storage-provisioner=true in "addons-214441"
	I0929 11:31:17.442543  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442544  595895 addons.go:69] Setting inspektor-gadget=true in profile "addons-214441"
	I0929 11:31:17.442557  595895 addons.go:238] Setting addon inspektor-gadget=true in "addons-214441"
	I0929 11:31:17.442563  595895 addons.go:69] Setting ingress-dns=true in profile "addons-214441"
	I0929 11:31:17.442575  595895 addons.go:238] Setting addon ingress-dns=true in "addons-214441"
	I0929 11:31:17.442588  595895 addons.go:69] Setting metrics-server=true in profile "addons-214441"
	I0929 11:31:17.442591  595895 addons.go:69] Setting volumesnapshots=true in profile "addons-214441"
	I0929 11:31:17.442599  595895 addons.go:238] Setting addon metrics-server=true in "addons-214441"
	I0929 11:31:17.442610  595895 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-214441"
	I0929 11:31:17.442602  595895 config.go:182] Loaded profile config "addons-214441": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:31:17.442620  595895 addons.go:238] Setting addon volumesnapshots=true in "addons-214441"
	I0929 11:31:17.442622  595895 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-214441"
	I0929 11:31:17.442631  595895 addons.go:69] Setting volcano=true in profile "addons-214441"
	I0929 11:31:17.442647  595895 addons.go:238] Setting addon volcano=true in "addons-214441"
	I0929 11:31:17.442826  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442847  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442963  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443004  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443177  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443198  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443212  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443242  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443255  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.443270  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443292  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443439  595895 out.go:179] * Verifying Kubernetes components...
	I0929 11:31:17.443489  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443521  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443564  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443603  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443459  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.443699  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.443852  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443879  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443895  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.444137  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.444199  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.444468  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.454269  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:31:17.455462  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.455556  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.457160  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.457213  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.458697  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.458765  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.459732  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37039
	I0929 11:31:17.459901  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.459979  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.460127  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.460161  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.460170  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.460239  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.460291  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44679
	I0929 11:31:17.460695  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.463901  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.463928  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.464092  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.465162  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.465408  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.466171  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.466824  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.467158  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.479447  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.479512  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.482323  595895 addons.go:238] Setting addon default-storageclass=true in "addons-214441"
	I0929 11:31:17.482391  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.482773  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.482798  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.493064  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45797
	I0929 11:31:17.493710  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40233
	I0929 11:31:17.496980  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.497697  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.497723  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.498583  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.499544  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.500891  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.502188  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.503325  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.503345  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.503676  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33541
	I0929 11:31:17.503826  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.504644  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.504730  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.505209  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.506256  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.506279  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.506340  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36303
	I0929 11:31:17.506984  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46129
	I0929 11:31:17.507294  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.507677  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.507745  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37135
	I0929 11:31:17.508552  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.509057  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.509394  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.509407  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.509415  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.510041  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.510142  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.510163  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.511579  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.513259  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.513521  595895 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-214441"
	I0929 11:31:17.513538  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46151
	I0929 11:31:17.513575  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.514124  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.514166  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.511927  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.514352  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.513596  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42277
	I0929 11:31:17.520718  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.520752  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44361
	I0929 11:31:17.521039  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.521092  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33947
	I0929 11:31:17.521207  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43367
	I0929 11:31:17.520724  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37739
	I0929 11:31:17.522317  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.522444  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.522469  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.522507  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.522852  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.522920  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.523211  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.523225  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.523306  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.523461  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.523473  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.524082  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.524376  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.524523  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.524535  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.524631  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.524746  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34633
	I0929 11:31:17.529249  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.529354  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.529387  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.529766  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.529766  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.529799  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.529807  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.529908  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.530061  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.530343  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.530371  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.530465  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.530878  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.530932  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.531382  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.531639  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.531658  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.532124  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.532483  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.533015  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.533033  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.533472  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.533508  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.534270  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.535229  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.535779  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.535886  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.537511  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.538187  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39035
	I0929 11:31:17.539952  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.540005  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.540222  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I0929 11:31:17.540575  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43245
	I0929 11:31:17.540786  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.540854  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.540890  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.541625  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.541647  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.542032  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.542195  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.542600  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.543176  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.543185  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.543199  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.543204  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.543307  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
	I0929 11:31:17.544136  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.544545  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.544610  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.544640  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.545415  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.545449  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.546464  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.546490  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.546965  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.547387  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.548714  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.548795  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.550669  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38149
	I0929 11:31:17.551412  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.551773  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34225
	I0929 11:31:17.552171  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.552255  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.552199  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.552753  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.552854  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.553685  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.553778  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.554307  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.554514  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.555149  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.557383  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.558025  595895 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 11:31:17.559210  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 11:31:17.559231  595895 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 11:31:17.559262  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.559338  595895 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.12.2
	I0929 11:31:17.560620  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.560681  595895 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.12.2
	I0929 11:31:17.560823  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34665
	I0929 11:31:17.561393  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.562236  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.562295  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.562751  595895 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 11:31:17.563140  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.563492  595895 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.12.2
	I0929 11:31:17.564252  595895 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 11:31:17.564269  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 11:31:17.564289  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.564293  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.564684  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.564737  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.565023  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.565146  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.567800  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.568057  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.568262  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36805
	I0929 11:31:17.568522  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.568701  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.569229  595895 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0929 11:31:17.569253  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (498149 bytes)
	I0929 11:31:17.569273  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.569959  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.570047  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.572257  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.572409  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.572423  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.573470  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.573495  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.573534  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42077
	I0929 11:31:17.574161  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.574166  595895 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 11:31:17.574420  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.574975  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.575036  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.575329  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.575415  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.575430  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.575671  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.575865  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.576099  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.577061  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.577247  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.577378  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.577535  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.577554  595895 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 11:31:17.577582  595895 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 11:31:17.577605  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.579736  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40239
	I0929 11:31:17.580597  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.581383  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.581446  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.582289  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.582694  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44947
	I0929 11:31:17.582952  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.583853  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.585630  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46433
	I0929 11:31:17.585637  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38605
	I0929 11:31:17.586733  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.586755  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.586846  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.587240  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.587458  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.587548  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.587503  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I0929 11:31:17.588342  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.588817  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.588838  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.589534  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35991
	I0929 11:31:17.589680  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.589727  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.589953  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.590461  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.590684  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.590701  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.590814  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.590864  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.591866  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.592243  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.592985  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.593774  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.593791  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.594759  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.595210  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.595390  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.596824  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.597871  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.598227  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.598762  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35305
	I0929 11:31:17.599344  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 11:31:17.600871  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.600871  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.600871  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.600928  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.600961  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.600994  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39573
	I0929 11:31:17.601002  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43843
	I0929 11:31:17.601641  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 11:31:17.601827  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.601850  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.601913  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.602052  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.602151  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I0929 11:31:17.602155  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.602306  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.602590  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.602610  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.602811  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.602977  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.603038  595895 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 11:31:17.603089  595895 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 11:31:17.603260  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.603328  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.603564  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.603593  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.603752  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.604258  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.604320  595895 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 11:31:17.604825  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.604525  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.605686  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.605694  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.604846  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.604946  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 11:31:17.605125  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.606062  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.606154  595895 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 11:31:17.606169  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.606174  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.607283  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.607459  595895 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 11:31:17.607513  595895 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 11:31:17.608000  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 11:31:17.608022  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.607722  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.607825  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.608327  595895 out.go:179]   - Using image docker.io/busybox:stable
	I0929 11:31:17.608504  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.609208  595895 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 11:31:17.609380  595895 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 11:31:17.609617  595895 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 11:31:17.609695  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.609885  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
	I0929 11:31:17.610214  595895 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 11:31:17.610480  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 11:31:17.610442  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.610634  595895 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:31:17.610651  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 11:31:17.610666  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.610637  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 11:31:17.610551  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.611056  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.611127  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.611242  595895 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 11:31:17.612177  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.612200  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.612367  595895 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 11:31:17.612539  595895 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 11:31:17.612558  595895 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 11:31:17.612574  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 11:31:17.612702  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.612652  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.613066  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.613132  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.613978  595895 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 11:31:17.614058  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 11:31:17.614157  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.614015  595895 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 11:31:17.614286  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 11:31:17.614314  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.614339  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40205
	I0929 11:31:17.614532  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 11:31:17.614774  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.614918  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 11:31:17.615384  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.615994  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.616036  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.616065  595895 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 11:31:17.616139  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 11:31:17.616150  595895 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 11:31:17.616217  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.616451  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.616766  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.617254  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 11:31:17.618390  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.618595  595895 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 11:31:17.619658  595895 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 11:31:17.619715  595895 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 11:31:17.619728  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 11:31:17.619752  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.619788  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 11:31:17.620191  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.620909  595895 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 11:31:17.620926  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 11:31:17.621015  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.621216  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.622235  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.622260  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.622296  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 11:31:17.622987  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.623010  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.623146  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.623384  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.623851  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 11:31:17.623870  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 11:31:17.623891  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.623910  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.623977  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.623991  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.624284  595895 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 11:31:17.624300  595895 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 11:31:17.624317  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.624324  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.624330  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.624655  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.624690  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.625088  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.625297  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.626099  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.626182  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.626247  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.626251  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.626597  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.626789  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.626890  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.627091  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.627238  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.627284  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.627374  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.627541  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.627907  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.627938  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.627949  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.627979  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.628066  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.628081  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.628268  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.628308  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.628533  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.628572  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.628735  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.628848  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.629214  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.629249  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.629266  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.629512  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.629592  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.629606  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.629764  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.629861  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.630008  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.630062  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.630142  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.630197  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.630311  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.630370  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.630910  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.631305  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.631821  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.632272  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.632296  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.632442  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.632503  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.632710  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.632789  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.633084  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.633162  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.633176  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.633207  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.633228  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.633242  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.633391  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.633435  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.633557  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.633619  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.633759  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.633793  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.634042  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.634042  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.634131  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.634164  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.634219  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.634716  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.634894  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.635088  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.635265  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	W0929 11:31:17.919750  595895 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44698->192.168.39.76:22: read: connection reset by peer
	I0929 11:31:17.919798  595895 retry.go:31] will retry after 127.603101ms: ssh: handshake failed: read tcp 192.168.39.1:44698->192.168.39.76:22: read: connection reset by peer
	W0929 11:31:17.927998  595895 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44716->192.168.39.76:22: read: connection reset by peer
	I0929 11:31:17.928034  595895 retry.go:31] will retry after 352.316454ms: ssh: handshake failed: read tcp 192.168.39.1:44716->192.168.39.76:22: read: connection reset by peer
	I0929 11:31:18.834850  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 11:31:18.834892  595895 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 11:31:18.867206  595895 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 11:31:18.867237  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 11:31:18.998018  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 11:31:19.019969  595895 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.57851512s)
	I0929 11:31:19.019988  595895 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.56567428s)
	I0929 11:31:19.020058  595895 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:31:19.020195  595895 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 11:31:19.047383  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 11:31:19.178551  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 11:31:19.194460  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 11:31:19.203493  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:31:19.224634  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0929 11:31:19.236908  595895 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:19.236937  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 11:31:19.339094  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 11:31:19.470368  595895 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 11:31:19.470407  595895 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 11:31:19.482955  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 11:31:19.507279  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 11:31:19.533452  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 11:31:19.533481  595895 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 11:31:19.580275  595895 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 11:31:19.580310  595895 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 11:31:19.612191  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 11:31:19.612228  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 11:31:19.656222  595895 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 11:31:19.656250  595895 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 11:31:19.707608  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 11:31:19.720943  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:19.949642  595895 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 11:31:19.949675  595895 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 11:31:20.010236  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 11:31:20.010269  595895 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 11:31:20.143152  595895 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 11:31:20.143179  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 11:31:20.164194  595895 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 11:31:20.164223  595895 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 11:31:20.178619  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 11:31:20.178652  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 11:31:20.352326  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 11:31:20.352354  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 11:31:20.399905  595895 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 11:31:20.399935  595895 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 11:31:20.528800  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 11:31:20.554026  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 11:31:20.608085  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 11:31:20.608132  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 11:31:20.855879  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 11:31:20.901072  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 11:31:20.901124  595895 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 11:31:21.046874  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 11:31:21.046903  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 11:31:21.279957  595895 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 11:31:21.279985  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 11:31:21.494633  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 11:31:21.494662  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 11:31:21.896279  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 11:31:22.355612  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 11:31:22.355644  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 11:31:23.136046  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 11:31:23.136083  595895 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 11:31:23.742895  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 11:31:23.742921  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 11:31:24.397559  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 11:31:24.397588  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 11:31:24.806696  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 11:31:24.806729  595895 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 11:31:25.028630  595895 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 11:31:25.028675  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:25.032868  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:25.033494  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:25.033526  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:25.033760  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:25.034027  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:25.034259  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:25.034422  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:25.610330  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 11:31:25.954809  595895 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 11:31:26.260607  595895 addons.go:238] Setting addon gcp-auth=true in "addons-214441"
	I0929 11:31:26.260695  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:26.261024  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:26.261068  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:26.276135  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36421
	I0929 11:31:26.276726  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:26.277323  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:26.277354  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:26.277924  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:26.278456  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:26.278490  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:26.293277  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40243
	I0929 11:31:26.293786  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:26.294319  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:26.294344  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:26.294858  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:26.295136  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:26.297279  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:26.297583  595895 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 11:31:26.297612  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:26.301409  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:26.302065  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:26.302093  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:26.302272  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:26.302474  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:26.302636  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:26.302830  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:26.648618  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.65053686s)
	I0929 11:31:26.648643  595895 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.628556534s)
	I0929 11:31:26.648693  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:26.648703  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:26.648707  595895 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.628486823s)
	I0929 11:31:26.648740  595895 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0929 11:31:26.648855  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.601423652s)
	I0929 11:31:26.648889  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:26.648898  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:26.649041  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:26.649056  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:26.649066  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:26.649073  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:26.649181  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:26.649214  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:26.649225  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:26.649256  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:26.649265  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:26.649555  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:26.649585  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:26.649698  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:26.649728  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:26.649741  595895 node_ready.go:35] waiting up to 6m0s for node "addons-214441" to be "Ready" ...
	I0929 11:31:26.649625  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:26.649665  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:26.797678  595895 node_ready.go:49] node "addons-214441" is "Ready"
	I0929 11:31:26.797712  595895 node_ready.go:38] duration metric: took 147.94134ms for node "addons-214441" to be "Ready" ...
	I0929 11:31:26.797735  595895 api_server.go:52] waiting for apiserver process to appear ...
	I0929 11:31:26.797797  595895 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:31:27.078868  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:27.078896  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:27.079284  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:27.079351  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:27.079372  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:27.220384  595895 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-214441" context rescaled to 1 replicas
	I0929 11:31:30.522194  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.34358993s)
	I0929 11:31:30.522263  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.327765304s)
	I0929 11:31:30.522284  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522297  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522297  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522308  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522336  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.318803941s)
	I0929 11:31:30.522386  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522398  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522641  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.522658  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.522685  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522695  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522794  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.522804  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.522813  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522819  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522874  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:30.522863  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.522905  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.522914  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522922  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522952  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:30.522984  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.522990  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.523183  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:30.523188  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.523205  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.523212  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.523216  595895 addons.go:479] Verifying addon ingress=true in "addons-214441"
	I0929 11:31:30.523222  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.527182  595895 out.go:179] * Verifying ingress addon...
	I0929 11:31:30.529738  595895 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 11:31:30.708830  595895 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 11:31:30.708859  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
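The kapi.go polling that follows watches pods matching this label selector until they leave Pending; a manual way to inspect the same pods, assuming only the context and namespace names already shown in the log:

	# Inspect the pods the kapi.go poll above is waiting on (names taken from the log):
	kubectl --context addons-214441 -n ingress-nginx get pods \
	  -l app.kubernetes.io/name=ingress-nginx -o wide
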
	I0929 11:31:31.235125  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:31.629964  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:32.068126  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:32.586294  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:33.055440  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:33.661344  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:33.865322  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (14.640641229s)
	I0929 11:31:33.865361  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (14.526214451s)
	I0929 11:31:33.865396  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865407  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865413  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (14.382417731s)
	I0929 11:31:33.865425  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865448  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (14.358144157s)
	I0929 11:31:33.865456  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865470  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865472  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865527  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (14.157883934s)
	I0929 11:31:33.865528  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865545  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865554  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865410  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865659  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (14.144676501s)
	W0929 11:31:33.865707  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:33.865740  595895 retry.go:31] will retry after 127.952259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
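These retries all fail for the same reason: client-side validation of ig-crd.yaml finds a YAML document with no top-level apiVersion or kind. A minimal sketch of how to reproduce that check without touching the cluster, assuming the file path from the log (--dry-run=client is an illustrative choice, not part of the addon flow):

	# Re-run the same client-side validation locally; a document missing either field
	# is rejected exactly as in the log above.
	kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml
	# Every document in the file needs both top-level fields, e.g.:
	#   apiVersion: apiextensions.k8s.io/v1
	#   kind: CustomResourceDefinition
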
	I0929 11:31:33.865790  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (13.336965067s)
	I0929 11:31:33.865796  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.865807  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.865810  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865818  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865821  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865826  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865864  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.865883  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.865895  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.865895  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.865906  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865922  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.865928  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865931  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.865939  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865945  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865960  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (13.311901558s)
	I0929 11:31:33.865978  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865986  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866077  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (13.010152282s)
	I0929 11:31:33.866096  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.866124  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866162  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866187  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866214  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866223  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866230  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.866237  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866283  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (11.969964695s)
	W0929 11:31:33.866347  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 11:31:33.866370  595895 retry.go:31] will retry after 213.926415ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
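This failure is an ordering problem rather than a malformed manifest: the VolumeSnapshotClass object lands in the same kubectl apply batch as the CRD that defines it, so the API server has not registered the new kind yet ("ensure CRDs are installed first"). A minimal sketch of the split-and-wait pattern, using the file and CRD names from the log (the 60s timeout is an arbitrary choice):

	# Apply the CRD first, wait until it is Established, then apply objects of the new kind.
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
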
	I0929 11:31:33.866587  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866618  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866622  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866627  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.866630  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866636  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866640  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866651  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866662  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866606  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866736  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866752  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866766  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.866780  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866875  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866910  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866925  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.867202  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.867264  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.867284  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.867303  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.867339  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.867618  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.867761  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.867769  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.867778  595895 addons.go:479] Verifying addon registry=true in "addons-214441"
	I0929 11:31:33.868269  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.868300  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.868305  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.868451  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.868463  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.868472  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.868479  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.869037  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.869070  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.869076  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.869084  595895 addons.go:479] Verifying addon metrics-server=true in "addons-214441"
	I0929 11:31:33.869798  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.869839  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.869847  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.869975  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.870031  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.871564  595895 out.go:179] * Verifying registry addon...
	I0929 11:31:33.872479  595895 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-214441 service yakd-dashboard -n yakd-dashboard
	
	I0929 11:31:33.874294  595895 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 11:31:33.993863  595895 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 11:31:33.993900  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:33.994009  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:34.081538  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 11:31:34.115447  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:34.146570  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:34.146609  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:34.146947  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:34.146967  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:34.413578  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.803181451s)
	I0929 11:31:34.413616  595895 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (8.116003731s)
	I0929 11:31:34.413656  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:34.413669  595895 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (7.615843233s)
	I0929 11:31:34.413709  595895 api_server.go:72] duration metric: took 16.972266985s to wait for apiserver process to appear ...
	I0929 11:31:34.413722  595895 api_server.go:88] waiting for apiserver healthz status ...
	I0929 11:31:34.413750  595895 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0929 11:31:34.413675  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:34.414213  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:34.414230  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:34.414254  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:34.414261  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:34.414511  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:34.414529  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:34.414543  595895 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-214441"
	I0929 11:31:34.415286  595895 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 11:31:34.416180  595895 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 11:31:34.417833  595895 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 11:31:34.418933  595895 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 11:31:34.419343  595895 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 11:31:34.419365  595895 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 11:31:34.428017  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:34.435805  595895 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
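The healthz probe above is a plain HTTPS GET; a manual equivalent, assuming the endpoint from the log and the default RBAC that exposes /healthz to unauthenticated callers:

	# -k because the apiserver's certificate is not in the host trust store; prints "ok" on success.
	curl -k https://192.168.39.76:8443/healthz
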
	I0929 11:31:34.443092  595895 api_server.go:141] control plane version: v1.34.0
	I0929 11:31:34.443139  595895 api_server.go:131] duration metric: took 29.409177ms to wait for apiserver health ...
	I0929 11:31:34.443150  595895 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 11:31:34.495447  595895 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 11:31:34.495473  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:34.527406  595895 system_pods.go:59] 20 kube-system pods found
	I0929 11:31:34.527452  595895 system_pods.go:61] "amd-gpu-device-plugin-7jx7f" [97d8a167-36be-44b5-b99c-2b55db99df3e] Running
	I0929 11:31:34.527458  595895 system_pods.go:61] "coredns-66bc5c9577-fkh52" [bd9cc142-39db-4ed9-9ecc-3d270499136c] Running
	I0929 11:31:34.527463  595895 system_pods.go:61] "coredns-66bc5c9577-sgnb2" [343874b8-6c4d-4e36-8ebf-a379e3a93a98] Running
	I0929 11:31:34.527471  595895 system_pods.go:61] "csi-hostpath-attacher-0" [27d9af25-8d41-4c37-9359-c7bd4f88f09f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 11:31:34.527475  595895 system_pods.go:61] "csi-hostpath-resizer-0" [4f24e046-d5b2-41ff-a051-b0a572bf9348] Pending
	I0929 11:31:34.527484  595895 system_pods.go:61] "csi-hostpathplugin-8279f" [3cc6db25-521c-4711-9d36-e22ab6d16249] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 11:31:34.527490  595895 system_pods.go:61] "etcd-addons-214441" [1710b264-ccec-4054-8a1c-7d8e5ce49163] Running
	I0929 11:31:34.527494  595895 system_pods.go:61] "kube-apiserver-addons-214441" [302fdd61-6c51-4e6e-a1af-7856ccb9f2ca] Running
	I0929 11:31:34.527502  595895 system_pods.go:61] "kube-controller-manager-addons-214441" [3e9c3a44-d14e-4335-a141-f7c648e6159d] Running
	I0929 11:31:34.527507  595895 system_pods.go:61] "kube-ingress-dns-minikube" [12920e9d-debd-4783-9f79-1c3a345e576e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 11:31:34.527513  595895 system_pods.go:61] "kube-proxy-d9fnb" [229af565-3300-4cb4-8289-f3d9b4a9af81] Running
	I0929 11:31:34.527520  595895 system_pods.go:61] "kube-scheduler-addons-214441" [bcaec156-9259-4787-9d02-01a2203b43ac] Running
	I0929 11:31:34.527524  595895 system_pods.go:61] "metrics-server-85b7d694d7-zlrv7" [003bc9b9-be00-4e08-8af7-3443d0502181] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 11:31:34.527533  595895 system_pods.go:61] "nvidia-device-plugin-daemonset-x7b8m" [b96ad59d-d30d-4437-a5c7-ce0c5fc69348] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 11:31:34.527541  595895 system_pods.go:61] "registry-66898fdd98-d7zx7" [e0282924-ef3d-48eb-8906-ea10f183b39e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 11:31:34.527547  595895 system_pods.go:61] "registry-creds-764b6fb674-td8pw" [c6001777-2f97-49d8-972b-27d373557795] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 11:31:34.527557  595895 system_pods.go:61] "registry-proxy-grb7m" [d79830e7-a640-46a7-9b07-38e39aceac96] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 11:31:34.527562  595895 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pw4g9" [74b07f42-da97-4ad5-8fdb-748ab5cfbde2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:31:34.527571  595895 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wvh2l" [987a4a45-4d77-43ac-816c-ebc08faf51bc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:31:34.527575  595895 system_pods.go:61] "storage-provisioner" [e1a79041-edc7-4f8e-96cb-8b3566893a0e] Running
	I0929 11:31:34.527582  595895 system_pods.go:74] duration metric: took 84.42539ms to wait for pod list to return data ...
	I0929 11:31:34.527594  595895 default_sa.go:34] waiting for default service account to be created ...
	I0929 11:31:34.549252  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:34.556947  595895 default_sa.go:45] found service account: "default"
	I0929 11:31:34.556977  595895 default_sa.go:55] duration metric: took 29.376735ms for default service account to be created ...
	I0929 11:31:34.556988  595895 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 11:31:34.596290  595895 system_pods.go:86] 20 kube-system pods found
	I0929 11:31:34.596322  595895 system_pods.go:89] "amd-gpu-device-plugin-7jx7f" [97d8a167-36be-44b5-b99c-2b55db99df3e] Running
	I0929 11:31:34.596330  595895 system_pods.go:89] "coredns-66bc5c9577-fkh52" [bd9cc142-39db-4ed9-9ecc-3d270499136c] Running
	I0929 11:31:34.596334  595895 system_pods.go:89] "coredns-66bc5c9577-sgnb2" [343874b8-6c4d-4e36-8ebf-a379e3a93a98] Running
	I0929 11:31:34.596343  595895 system_pods.go:89] "csi-hostpath-attacher-0" [27d9af25-8d41-4c37-9359-c7bd4f88f09f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 11:31:34.596349  595895 system_pods.go:89] "csi-hostpath-resizer-0" [4f24e046-d5b2-41ff-a051-b0a572bf9348] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 11:31:34.596357  595895 system_pods.go:89] "csi-hostpathplugin-8279f" [3cc6db25-521c-4711-9d36-e22ab6d16249] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 11:31:34.596361  595895 system_pods.go:89] "etcd-addons-214441" [1710b264-ccec-4054-8a1c-7d8e5ce49163] Running
	I0929 11:31:34.596365  595895 system_pods.go:89] "kube-apiserver-addons-214441" [302fdd61-6c51-4e6e-a1af-7856ccb9f2ca] Running
	I0929 11:31:34.596369  595895 system_pods.go:89] "kube-controller-manager-addons-214441" [3e9c3a44-d14e-4335-a141-f7c648e6159d] Running
	I0929 11:31:34.596375  595895 system_pods.go:89] "kube-ingress-dns-minikube" [12920e9d-debd-4783-9f79-1c3a345e576e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 11:31:34.596381  595895 system_pods.go:89] "kube-proxy-d9fnb" [229af565-3300-4cb4-8289-f3d9b4a9af81] Running
	I0929 11:31:34.596385  595895 system_pods.go:89] "kube-scheduler-addons-214441" [bcaec156-9259-4787-9d02-01a2203b43ac] Running
	I0929 11:31:34.596390  595895 system_pods.go:89] "metrics-server-85b7d694d7-zlrv7" [003bc9b9-be00-4e08-8af7-3443d0502181] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 11:31:34.596398  595895 system_pods.go:89] "nvidia-device-plugin-daemonset-x7b8m" [b96ad59d-d30d-4437-a5c7-ce0c5fc69348] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 11:31:34.596404  595895 system_pods.go:89] "registry-66898fdd98-d7zx7" [e0282924-ef3d-48eb-8906-ea10f183b39e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 11:31:34.596409  595895 system_pods.go:89] "registry-creds-764b6fb674-td8pw" [c6001777-2f97-49d8-972b-27d373557795] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 11:31:34.596413  595895 system_pods.go:89] "registry-proxy-grb7m" [d79830e7-a640-46a7-9b07-38e39aceac96] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 11:31:34.596421  595895 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pw4g9" [74b07f42-da97-4ad5-8fdb-748ab5cfbde2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:31:34.596427  595895 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wvh2l" [987a4a45-4d77-43ac-816c-ebc08faf51bc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:31:34.596430  595895 system_pods.go:89] "storage-provisioner" [e1a79041-edc7-4f8e-96cb-8b3566893a0e] Running
	I0929 11:31:34.596439  595895 system_pods.go:126] duration metric: took 39.444621ms to wait for k8s-apps to be running ...
	I0929 11:31:34.596450  595895 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 11:31:34.596507  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:31:34.638029  595895 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 11:31:34.638063  595895 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 11:31:34.896745  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:35.000193  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:35.038316  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:35.057490  595895 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 11:31:35.057521  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 11:31:35.300242  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 11:31:35.379546  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:35.428677  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:35.535091  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:35.881406  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:35.938231  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:36.039311  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:36.382155  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:36.425663  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:36.535684  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:36.886954  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:36.927490  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:37.044975  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:37.382165  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:37.431026  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:37.547302  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:37.920673  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:37.944368  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:38.063651  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:38.330176  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.336121933s)
	W0929 11:31:38.330254  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:38.330284  595895 retry.go:31] will retry after 312.007159ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:38.330290  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.248696545s)
	I0929 11:31:38.330341  595895 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.73381029s)
	I0929 11:31:38.330367  595895 system_svc.go:56] duration metric: took 3.733914032s WaitForService to wait for kubelet
	I0929 11:31:38.330377  595895 kubeadm.go:578] duration metric: took 20.888935766s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:31:38.330403  595895 node_conditions.go:102] verifying NodePressure condition ...
	I0929 11:31:38.330343  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:38.330449  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:38.330448  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (3.030164486s)
	I0929 11:31:38.330495  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:38.330509  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:38.330817  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:38.330832  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:38.330841  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:38.330848  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:38.330851  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:38.330882  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:38.330895  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:38.330903  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:38.330910  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:38.331221  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:38.331223  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:38.331238  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:38.331251  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:38.331258  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:38.332465  595895 addons.go:479] Verifying addon gcp-auth=true in "addons-214441"
	I0929 11:31:38.334695  595895 out.go:179] * Verifying gcp-auth addon...
	I0929 11:31:38.336858  595895 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 11:31:38.341614  595895 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0929 11:31:38.341645  595895 node_conditions.go:123] node cpu capacity is 2
	I0929 11:31:38.341662  595895 node_conditions.go:105] duration metric: took 11.25287ms to run NodePressure ...
	I0929 11:31:38.341688  595895 start.go:241] waiting for startup goroutines ...
	I0929 11:31:38.343873  595895 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 11:31:38.343896  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:38.381193  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:38.423947  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:38.537472  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:38.642514  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:38.843272  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:38.944959  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:38.945123  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:39.033029  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:39.342350  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:39.380435  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:39.424230  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:39.537307  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:39.645310  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.002737784s)
	W0929 11:31:39.645357  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:39.645385  595895 retry.go:31] will retry after 298.904966ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
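
Note: the stderr above is kubectl's client-side validation rejecting /etc/kubernetes/addons/ig-crd.yaml because at least one YAML document in that manifest has no apiVersion or kind set; every retried apply below fails on the same check. As a minimal, standalone sketch (not minikube's code), the same check can be reproduced locally with gopkg.in/yaml.v3, assuming a local copy of the manifest:

```go
// check_manifest.go: illustrative only. Walks each YAML document in a manifest and
// flags documents that are missing apiVersion or kind, which is what kubectl's
// client-side validation is complaining about in the log above.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("ig-crd.yaml") // hypothetical local copy of the addon manifest
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break // no more documents
		} else if err != nil {
			fmt.Fprintf(os.Stderr, "document %d: %v\n", i, err)
			os.Exit(1)
		}
		if doc == nil {
			// empty document (e.g. a stray "---"); flag it rather than guessing how kubectl treats it
			fmt.Printf("document %d: empty\n", i)
			continue
		}
		if doc["apiVersion"] == nil || doc["kind"] == nil {
			fmt.Printf("document %d: apiVersion or kind not set\n", i)
		}
	}
}
```

Passing --validate=false, as the error message suggests, would only skip this check rather than fix the manifest.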
	I0929 11:31:39.841477  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:39.879072  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:39.922915  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:39.945025  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:40.034681  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:40.343272  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:40.382403  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:40.422942  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:40.539442  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:40.844610  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:40.879893  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:40.924951  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:41.033826  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:41.124246  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.179166796s)
	W0929 11:31:41.124315  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:41.124339  595895 retry.go:31] will retry after 649.538473ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:41.343005  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:41.380641  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:41.425734  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:41.533709  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:41.774560  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:41.841236  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:41.878527  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:41.924650  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:42.035789  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:42.342468  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:42.380731  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:42.426156  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:42.534471  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:42.785912  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.011289133s)
	W0929 11:31:42.785977  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:42.786005  595895 retry.go:31] will retry after 983.289132ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:42.842132  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:42.879170  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:42.924415  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:43.036251  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:43.343664  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:43.382521  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:43.423598  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:43.534301  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:43.770317  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:43.843700  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:43.880339  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:43.925260  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:44.035702  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:44.342152  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:44.380186  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:44.427570  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:44.537930  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:44.812756  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.042397237s)
	W0929 11:31:44.812812  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:44.812836  595895 retry.go:31] will retry after 2.137947671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:44.843045  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:44.881899  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:44.924762  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:45.035718  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:45.343550  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:45.378897  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:45.424866  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:45.534338  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:45.841433  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:45.877671  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:45.923645  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:46.034379  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:46.372337  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:46.406356  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:46.426866  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:46.534032  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:46.842343  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:46.879578  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:46.925175  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:46.951146  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:47.034343  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:47.344240  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:47.382773  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:47.424668  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:47.540037  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:47.843427  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:47.879391  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:47.924262  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:47.960092  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.008893629s)
	W0929 11:31:47.960177  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:47.960206  595895 retry.go:31] will retry after 2.504757299s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:48.033591  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:48.341481  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:48.378697  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:48.424514  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:48.536592  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:48.842185  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:48.879742  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:48.923614  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:49.034098  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:49.340781  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:49.379506  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:49.423231  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:49.534207  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:49.842436  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:49.877896  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:49.924231  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:50.034614  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:50.341556  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:50.379007  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:50.423685  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:50.465827  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:50.536792  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:50.843824  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:50.879454  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:50.924711  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:51.035609  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:51.343958  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:51.379841  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:51.424239  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:51.468054  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.002171892s)
	W0929 11:31:51.468114  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:51.468140  595895 retry.go:31] will retry after 5.613548218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
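
The retry.go entries show the failing apply being re-run with a growing delay (roughly 0.3s, 0.65s, 0.98s, 2.1s, 2.5s, 5.6s so far). A minimal sketch of a backed-off retry loop of that shape, assuming simple doubling with jitter rather than minikube's actual retry policy:

```go
// retry_sketch.go: a self-contained illustration of a backed-off retry loop like the
// one suggested by the retry.go lines above. The doubling-with-jitter schedule is an
// assumption for illustration, not minikube's actual implementation.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs apply up to attempts times, roughly doubling the delay between tries.
func retry(attempts int, initial time.Duration, apply func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		// add up to ~50% jitter so concurrent retries don't synchronize
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
		fmt.Printf("apply failed, will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return fmt.Errorf("still failing after %d attempts: %w", attempts, err)
}

func main() {
	n := 0
	err := retry(5, 300*time.Millisecond, func() error {
		n++
		if n < 4 {
			return errors.New("error validating ig-crd.yaml") // stand-in for the kubectl failure
		}
		return nil
	})
	fmt.Println("result:", err)
}
```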
	I0929 11:31:51.533585  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:51.963029  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:51.963886  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:51.964026  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:52.060713  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:52.343223  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:52.378836  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:52.424767  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:52.534427  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:52.849585  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:52.879670  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:52.948684  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:53.048366  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:53.346453  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:53.380741  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:53.426760  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:53.533978  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:53.840987  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:53.879766  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:53.924223  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:54.035753  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:54.342742  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:54.378763  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:54.423439  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:54.535260  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:54.841859  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:54.880183  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:54.925299  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:55.033854  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:55.340853  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:55.378822  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:55.424172  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:55.534313  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:55.842189  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:55.879647  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:55.925521  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:56.034145  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:56.341524  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:56.384803  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:56.424070  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:56.533658  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:56.845007  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:56.881917  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:56.944166  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:57.044730  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:57.082647  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:57.345840  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:57.379131  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:57.425387  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:57.534328  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:57.843711  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:57.879327  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:57.925624  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:58.038058  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:58.345139  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:58.379479  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:58.427479  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:58.431242  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.348544969s)
	W0929 11:31:58.431293  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:58.431314  595895 retry.go:31] will retry after 5.599503168s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:58.535825  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:58.841717  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:58.878293  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:58.926559  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:59.035878  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:59.341486  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:59.381532  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:59.425077  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:59.532752  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:59.841172  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:59.878180  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:59.923096  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:00.034481  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:00.557941  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:00.559858  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:00.559963  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:00.560670  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:00.841990  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:00.879357  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:00.926097  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:01.036394  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:01.344642  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:01.379875  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:01.425784  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:01.534466  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:01.842499  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:01.878243  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:01.924047  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:02.033958  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:02.342377  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:02.380154  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:02.423813  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:02.535090  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:02.843862  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:02.879556  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:02.924521  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:03.034759  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:03.340099  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:03.378625  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:03.423534  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:03.534511  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:03.841201  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:03.878471  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:03.924393  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:04.031608  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:32:04.037031  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:04.344499  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:04.378709  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:04.426297  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:04.536239  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:04.842255  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:04.878783  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:04.925876  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:05.037628  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:05.250099  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.218439403s)
	W0929 11:32:05.250163  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:05.250186  595895 retry.go:31] will retry after 6.3969875s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:05.342875  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:05.380683  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:05.424490  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:05.534483  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:05.841804  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:05.880284  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:05.923385  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:06.034868  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:06.341952  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:06.378384  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:06.426408  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:06.535793  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:06.842154  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:06.880699  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:06.924358  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:07.035474  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:07.343686  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:07.378323  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:07.423762  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:07.535390  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:07.843851  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:07.881716  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:07.927684  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:08.037583  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:08.341340  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:08.380517  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:08.424488  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:08.535292  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:08.841002  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:08.879020  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:08.924253  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:09.089297  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:09.340800  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:09.377819  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:09.423823  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:09.534297  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:09.849243  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:09.950172  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:09.950267  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:10.036059  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:10.346922  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:10.379976  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:10.424634  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:10.538864  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:10.842015  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:10.879192  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:10.925328  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:11.040957  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:11.349029  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:11.380885  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:11.452716  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:11.533526  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:11.648223  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:32:11.846882  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:11.881994  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:11.924898  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:12.037323  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:12.342006  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:12.378476  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:12.425404  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:12.544040  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:12.792386  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.144111976s)
	W0929 11:32:12.792447  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:12.792475  595895 retry.go:31] will retry after 13.411476283s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:12.842021  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:12.880179  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:12.924788  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:13.040328  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:13.342434  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:13.378229  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:13.423792  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:13.533728  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:13.843276  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:13.881114  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:13.924958  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:14.034759  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:14.342679  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:14.391569  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:14.496903  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:14.537421  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:14.843175  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:14.880166  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:14.923743  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:15.033994  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:15.343313  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:15.378881  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:15.423448  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:15.538003  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:15.845026  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:15.879663  595895 kapi.go:107] duration metric: took 42.005359357s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 11:32:15.924537  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:16.034645  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:16.341847  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:16.423671  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:16.542699  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:16.844239  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:16.931285  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:17.038278  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:17.353396  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:17.429078  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:17.543634  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:17.844298  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:17.946425  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:18.041877  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:18.345833  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:18.428431  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:18.540908  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:18.840650  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:18.941953  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:19.044517  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:19.341978  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:19.424948  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:19.534807  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:19.839721  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:19.923994  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:20.033049  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:20.342737  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:20.425291  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:20.540624  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:20.844143  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:20.923381  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:21.034820  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:21.343509  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:21.423753  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:21.533929  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:21.841334  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:21.923232  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:22.035002  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:22.630689  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:22.632895  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:22.632941  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:22.845479  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:22.926876  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:23.038229  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:23.355255  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:23.427225  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:23.538625  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:23.844878  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:23.934777  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:24.035280  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:24.346419  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:24.423729  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:24.534589  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:24.842134  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:24.923902  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:25.034892  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:25.362314  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:25.488458  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:25.587385  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:25.861373  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:25.929934  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:26.034355  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:26.204639  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:32:26.361386  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:26.429512  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:26.537022  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:26.843446  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:26.926054  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:27.035634  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:27.344336  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:27.424901  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:27.537642  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:27.644135  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.439429306s)
	W0929 11:32:27.644198  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:27.644227  595895 retry.go:31] will retry after 29.327619656s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:27.842768  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:27.923415  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:28.034767  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:28.343738  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:28.445503  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:28.546159  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:28.851845  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:28.927009  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:29.033400  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:29.341998  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:29.426197  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:29.537012  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:29.842012  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:29.924188  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:30.034037  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:30.346865  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:30.430853  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:30.542769  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:30.842367  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:30.922904  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:31.033768  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:31.341881  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:31.425338  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:31.535963  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:31.844006  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:31.924398  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:32.034705  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:32.346065  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:32.423672  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:32.534377  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:32.842447  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:32.925931  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:33.034800  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:33.387960  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:33.429171  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:33.546901  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:33.852519  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:33.953288  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:34.035154  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:34.344025  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:34.431259  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:34.536600  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:34.843653  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:34.927609  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:35.036794  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:35.341408  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:35.425312  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:35.541227  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:35.847181  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:35.947699  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:36.035760  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:36.344915  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:36.424144  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:36.535593  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:36.841859  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:36.924975  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:37.037919  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:37.452583  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:37.459370  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:37.537236  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:37.841013  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:37.923280  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:38.036969  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:38.340515  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:38.425769  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:38.549235  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:38.842439  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:38.925062  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:39.035751  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:39.341398  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:39.422778  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:39.534951  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:39.841870  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:39.925988  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:40.034408  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:40.340654  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:40.424350  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:40.535075  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:40.843236  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:40.924921  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:41.034406  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:41.497913  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:41.499293  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:41.535243  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:41.844020  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:41.923065  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:42.045660  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:42.342026  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:42.426493  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:42.535570  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:42.841485  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:42.923010  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:43.039027  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:43.346733  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:43.432195  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:43.540145  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:43.885089  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:43.972714  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:44.068027  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:44.345507  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:44.427061  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:44.535862  595895 kapi.go:107] duration metric: took 1m14.00612311s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 11:32:44.842493  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:44.929592  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:45.347246  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:45.424028  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:45.841905  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:45.923701  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:46.347078  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:46.425229  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:46.845817  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:46.925006  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:47.341259  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:47.426132  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:47.845143  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:47.924205  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:48.349502  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:48.452604  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:48.846442  595895 kapi.go:107] duration metric: took 1m10.509578031s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 11:32:48.847867  595895 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-214441 cluster.
	I0929 11:32:48.849227  595895 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 11:32:48.850374  595895 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
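The three gcp-auth messages above describe how the addon behaves from here on: GCP credentials get mounted into every newly created pod unless the pod carries the `gcp-auth-skip-secret` label. As a minimal sketch of opting a single pod out, assuming the webhook keys only on the label named in the log (the pod name and the label value "true" are illustrative, not values taken from this run):

	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-auth-demo            # hypothetical name, for illustration only
	  labels:
	    gcp-auth-skip-secret: "true"    # label key quoted from the log; value assumed
	spec:
	  containers:
	  - name: main
	    image: busybox
	    command: ["sleep", "3600"]
	EOF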
	I0929 11:32:48.946549  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:49.426824  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:49.927802  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:50.426120  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:50.925871  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:51.426655  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:51.927170  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:52.426213  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:52.923791  595895 kapi.go:107] duration metric: took 1m18.504852087s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0929 11:32:56.972597  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 11:32:57.723998  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:57.724041  595895 retry.go:31] will retry after 18.741816746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:33:16.468501  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 11:33:17.218683  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:33:17.218783  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:33:17.218797  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:33:17.219140  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:33:17.219161  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:33:17.219172  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:33:17.219180  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:33:17.219203  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:33:17.219480  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:33:17.219502  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:33:17.219534  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	W0929 11:33:17.219634  595895 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
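Every retry of the inspektor-gadget apply fails the same way: the objects from ig-deployment.yaml come back "unchanged"/"configured", while /etc/kubernetes/addons/ig-crd.yaml is rejected because kubectl finds no top-level `apiVersion` or `kind` in it, and `--force` cannot work around a validation error. A quick way to confirm that from the node is sketched below, assuming the file on disk is the one the log quotes; the header shown in the comments is the usual CRD preamble, not something read from this run:

	# peek at the manifest and check for the two required top-level fields
	sudo head -n 20 /etc/kubernetes/addons/ig-crd.yaml
	sudo grep -nE '^(apiVersion|kind):' /etc/kubernetes/addons/ig-crd.yaml \
	  || echo "no top-level apiVersion/kind found in ig-crd.yaml"
	# a well-formed CRD document would normally start with:
	#   apiVersion: apiextensions.k8s.io/v1
	#   kind: CustomResourceDefinition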
	I0929 11:33:17.221637  595895 out.go:179] * Enabled addons: ingress-dns, storage-provisioner-rancher, storage-provisioner, cloud-spanner, volcano, amd-gpu-device-plugin, metrics-server, registry-creds, nvidia-device-plugin, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0929 11:33:17.223007  595895 addons.go:514] duration metric: took 1m59.781528816s for enable addons: enabled=[ingress-dns storage-provisioner-rancher storage-provisioner cloud-spanner volcano amd-gpu-device-plugin metrics-server registry-creds nvidia-device-plugin yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0929 11:33:17.223046  595895 start.go:246] waiting for cluster config update ...
	I0929 11:33:17.223066  595895 start.go:255] writing updated cluster config ...
	I0929 11:33:17.223379  595895 ssh_runner.go:195] Run: rm -f paused
	I0929 11:33:17.229885  595895 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:33:17.234611  595895 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fkh52" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.240669  595895 pod_ready.go:94] pod "coredns-66bc5c9577-fkh52" is "Ready"
	I0929 11:33:17.240694  595895 pod_ready.go:86] duration metric: took 6.057488ms for pod "coredns-66bc5c9577-fkh52" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.243134  595895 pod_ready.go:83] waiting for pod "etcd-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.248977  595895 pod_ready.go:94] pod "etcd-addons-214441" is "Ready"
	I0929 11:33:17.249003  595895 pod_ready.go:86] duration metric: took 5.848678ms for pod "etcd-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.251694  595895 pod_ready.go:83] waiting for pod "kube-apiserver-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.257270  595895 pod_ready.go:94] pod "kube-apiserver-addons-214441" is "Ready"
	I0929 11:33:17.257299  595895 pod_ready.go:86] duration metric: took 5.583626ms for pod "kube-apiserver-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.259585  595895 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.635253  595895 pod_ready.go:94] pod "kube-controller-manager-addons-214441" is "Ready"
	I0929 11:33:17.635287  595895 pod_ready.go:86] duration metric: took 375.675116ms for pod "kube-controller-manager-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.834921  595895 pod_ready.go:83] waiting for pod "kube-proxy-d9fnb" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:18.234706  595895 pod_ready.go:94] pod "kube-proxy-d9fnb" is "Ready"
	I0929 11:33:18.234735  595895 pod_ready.go:86] duration metric: took 399.786159ms for pod "kube-proxy-d9fnb" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:18.435590  595895 pod_ready.go:83] waiting for pod "kube-scheduler-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:18.834304  595895 pod_ready.go:94] pod "kube-scheduler-addons-214441" is "Ready"
	I0929 11:33:18.834340  595895 pod_ready.go:86] duration metric: took 398.719914ms for pod "kube-scheduler-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:18.834353  595895 pod_ready.go:40] duration metric: took 1.60442513s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:33:18.881427  595895 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 11:33:18.883901  595895 out.go:179] * Done! kubectl is now configured to use "addons-214441" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 29 11:33:42 addons-214441 dockerd[1525]: time="2025-09-29T11:33:42.094739949Z" level=warning msg="reference for unknown type: " digest="sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624" remote="docker.io/marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
	Sep 29 11:33:42 addons-214441 dockerd[1525]: time="2025-09-29T11:33:42.137686234Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:33:51 addons-214441 dockerd[1525]: time="2025-09-29T11:33:51.071723665Z" level=warning msg="reference for unknown type: " digest="sha256:286112e70bdbf88174a66895bb3c64dd9026b5a762025b61bcd8f6cac04e1b90" remote="docker.io/volcanosh/vc-controller-manager@sha256:286112e70bdbf88174a66895bb3c64dd9026b5a762025b61bcd8f6cac04e1b90"
	Sep 29 11:33:51 addons-214441 dockerd[1525]: time="2025-09-29T11:33:51.111681636Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:33:56 addons-214441 dockerd[1525]: time="2025-09-29T11:33:56.071799075Z" level=warning msg="reference for unknown type: " digest="sha256:b7c3bd73e2d9240cf17662451d50e0d73654342235a66cdfb2ec221f1628ae35" remote="docker.io/volcanosh/vc-webhook-manager@sha256:b7c3bd73e2d9240cf17662451d50e0d73654342235a66cdfb2ec221f1628ae35"
	Sep 29 11:33:56 addons-214441 dockerd[1525]: time="2025-09-29T11:33:56.111048013Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:34:00 addons-214441 dockerd[1525]: time="2025-09-29T11:34:00.077948336Z" level=warning msg="reference for unknown type: " digest="sha256:6e28f0f79d4cd09c1c34afaba41623c1b4d0fd7087cc99d6449a8a62e073b50e" remote="docker.io/volcanosh/vc-scheduler@sha256:6e28f0f79d4cd09c1c34afaba41623c1b4d0fd7087cc99d6449a8a62e073b50e"
	Sep 29 11:34:00 addons-214441 dockerd[1525]: time="2025-09-29T11:34:00.113868204Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:35:10 addons-214441 dockerd[1525]: time="2025-09-29T11:35:10.095864998Z" level=warning msg="reference for unknown type: " digest="sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624" remote="docker.io/marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
	Sep 29 11:35:10 addons-214441 dockerd[1525]: time="2025-09-29T11:35:10.137137388Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:35:23 addons-214441 dockerd[1525]: time="2025-09-29T11:35:23.068721842Z" level=warning msg="reference for unknown type: " digest="sha256:286112e70bdbf88174a66895bb3c64dd9026b5a762025b61bcd8f6cac04e1b90" remote="docker.io/volcanosh/vc-controller-manager@sha256:286112e70bdbf88174a66895bb3c64dd9026b5a762025b61bcd8f6cac04e1b90"
	Sep 29 11:35:23 addons-214441 dockerd[1525]: time="2025-09-29T11:35:23.103976868Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:35:25 addons-214441 dockerd[1525]: time="2025-09-29T11:35:25.063154839Z" level=warning msg="reference for unknown type: " digest="sha256:6e28f0f79d4cd09c1c34afaba41623c1b4d0fd7087cc99d6449a8a62e073b50e" remote="docker.io/volcanosh/vc-scheduler@sha256:6e28f0f79d4cd09c1c34afaba41623c1b4d0fd7087cc99d6449a8a62e073b50e"
	Sep 29 11:35:25 addons-214441 dockerd[1525]: time="2025-09-29T11:35:25.098459479Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:35:27 addons-214441 dockerd[1525]: time="2025-09-29T11:35:27.066913877Z" level=warning msg="reference for unknown type: " digest="sha256:b7c3bd73e2d9240cf17662451d50e0d73654342235a66cdfb2ec221f1628ae35" remote="docker.io/volcanosh/vc-webhook-manager@sha256:b7c3bd73e2d9240cf17662451d50e0d73654342235a66cdfb2ec221f1628ae35"
	Sep 29 11:35:27 addons-214441 dockerd[1525]: time="2025-09-29T11:35:27.107790260Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:38:00 addons-214441 dockerd[1525]: time="2025-09-29T11:38:00.089034080Z" level=warning msg="reference for unknown type: " digest="sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624" remote="docker.io/marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
	Sep 29 11:38:00 addons-214441 dockerd[1525]: time="2025-09-29T11:38:00.206748190Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:38:00 addons-214441 cri-dockerd[1389]: time="2025-09-29T11:38:00Z" level=info msg="Stop pulling image docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624: docker.io/marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624: Pulling from marcnuri/yakd"
	Sep 29 11:38:07 addons-214441 dockerd[1525]: time="2025-09-29T11:38:07.063975619Z" level=warning msg="reference for unknown type: " digest="sha256:286112e70bdbf88174a66895bb3c64dd9026b5a762025b61bcd8f6cac04e1b90" remote="docker.io/volcanosh/vc-controller-manager@sha256:286112e70bdbf88174a66895bb3c64dd9026b5a762025b61bcd8f6cac04e1b90"
	Sep 29 11:38:07 addons-214441 dockerd[1525]: time="2025-09-29T11:38:07.103031753Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:38:13 addons-214441 dockerd[1525]: time="2025-09-29T11:38:13.071513164Z" level=warning msg="reference for unknown type: " digest="sha256:b7c3bd73e2d9240cf17662451d50e0d73654342235a66cdfb2ec221f1628ae35" remote="docker.io/volcanosh/vc-webhook-manager@sha256:b7c3bd73e2d9240cf17662451d50e0d73654342235a66cdfb2ec221f1628ae35"
	Sep 29 11:38:13 addons-214441 dockerd[1525]: time="2025-09-29T11:38:13.112099321Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:38:15 addons-214441 dockerd[1525]: time="2025-09-29T11:38:15.065846446Z" level=warning msg="reference for unknown type: " digest="sha256:6e28f0f79d4cd09c1c34afaba41623c1b4d0fd7087cc99d6449a8a62e073b50e" remote="docker.io/volcanosh/vc-scheduler@sha256:6e28f0f79d4cd09c1c34afaba41623c1b4d0fd7087cc99d6449a8a62e073b50e"
	Sep 29 11:38:15 addons-214441 dockerd[1525]: time="2025-09-29T11:38:15.102986155Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	af544573fc0a7       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   02a7d350b8353       csi-hostpathplugin-8279f
	0ce41bd4faa5b       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          6 minutes ago       Running             csi-provisioner                          0                   02a7d350b8353       csi-hostpathplugin-8279f
	a8b5f59d15a16       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            6 minutes ago       Running             liveness-probe                           0                   02a7d350b8353       csi-hostpathplugin-8279f
	31549aa99d4d8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7                                 6 minutes ago       Running             gcp-auth                                 0                   14b4e2db2cb2b       gcp-auth-78565c9fb4-7pfmd
	2514173d96a26       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           6 minutes ago       Running             hostpath                                 0                   02a7d350b8353       csi-hostpathplugin-8279f
	9b5cb54a94a47       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             6 minutes ago       Running             controller                               0                   8b83af6a32772       ingress-nginx-controller-9cc49f96f-h99dj
	ef4f6e22ce31a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                6 minutes ago       Running             node-driver-registrar                    0                   02a7d350b8353       csi-hostpathplugin-8279f
	5810f70edf860       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   6 minutes ago       Running             csi-external-health-monitor-controller   0                   02a7d350b8353       csi-hostpathplugin-8279f
	51f0c139f4f77       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              6 minutes ago       Running             csi-resizer                              0                   9e3b6780764f8       csi-hostpath-resizer-0
	e02a58717cc7c       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             6 minutes ago       Running             csi-attacher                             0                   00ac4103d1658       csi-hostpath-attacher-0
	e805d753e363a       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      6 minutes ago       Running             volume-snapshot-controller               0                   5ef4f58a4b6da       snapshot-controller-7d9fbc56b8-pw4g9
	868179ee6252a       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      6 minutes ago       Running             volume-snapshot-controller               0                   34844f808604d       snapshot-controller-7d9fbc56b8-wvh2l
	30d73d85a386c       8c217da6734db                                                                                                                                6 minutes ago       Exited              patch                                    1                   63ec050554699       ingress-nginx-admission-patch-tp6tp
	4182ff3d1e473       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   6 minutes ago       Exited              create                                   0                   f519da4bfec27       ingress-nginx-admission-create-s6nvq
	220ba84adaccb       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            6 minutes ago       Running             gadget                                   0                   95e2903b29637       gadget-xvvvf
	31302c4317135       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       7 minutes ago       Running             local-path-provisioner                   0                   621898582dfa1       local-path-provisioner-648f6765c9-fq5l2
	efbb7cee1304e       registry.k8s.io/metrics-server/metrics-server@sha256:89258156d0e9af60403eafd44da9676fd66f600c7934d468ccc17e42b199aee2                        7 minutes ago       Running             metrics-server                           0                   a6da2813101a6       metrics-server-85b7d694d7-zlrv7
	e7580cc057c84       gcr.io/k8s-minikube/kube-registry-proxy@sha256:f832bbe1d48c62de040bd793937eaa0c05d2f945a55376a99c80a4dd9961aeb1                              7 minutes ago       Running             registry-proxy                           0                   9a881a5471f2a       registry-proxy-grb7m
	27e640b6d6395       registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d                                                             7 minutes ago       Running             registry                                 0                   962de4a995d2e       registry-66898fdd98-d7zx7
	48adb1b2452be       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                                         7 minutes ago       Running             minikube-ingress-dns                     0                   3ce8cc04a57f5       kube-ingress-dns-minikube
	e49c7022a687d       gcr.io/cloud-spanner-emulator/emulator@sha256:15030dbba87c4fba50265cc80e62278eb41925d24d3a54c30563eff06304bf58                               7 minutes ago       Running             cloud-spanner-emulator                   0                   6c19c08a0c4b0       cloud-spanner-emulator-85f6b7fc65-vpv4f
	5f92d762e43b0       nvcr.io/nvidia/k8s-device-plugin@sha256:630596340f8e83aa10b0bc13a46db76772e31b7dccfc34d3a4e41ab7e0aa6117                                     7 minutes ago       Running             nvidia-device-plugin-ctr                 0                   32059c64edb96       nvidia-device-plugin-daemonset-x7b8m
	388ea771a1c89       6e38f40d628db                                                                                                                                7 minutes ago       Running             storage-provisioner                      0                   a451536f2a3ae       storage-provisioner
	ef7f4d809a410       rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                               7 minutes ago       Running             amd-gpu-device-plugin                    0                   efbec0257280a       amd-gpu-device-plugin-7jx7f
	5629c377b6053       52546a367cc9e                                                                                                                                8 minutes ago       Running             coredns                                  0                   b6c342cfbd0e9       coredns-66bc5c9577-fkh52
	cf32cea215063       df0860106674d                                                                                                                                8 minutes ago       Running             kube-proxy                               0                   164bb1f35fdbf       kube-proxy-d9fnb
	1b712309a5901       46169d968e920                                                                                                                                8 minutes ago       Running             kube-scheduler                           0                   16368e958b541       kube-scheduler-addons-214441
	5df8c088591fb       5f1f5298c888d                                                                                                                                8 minutes ago       Running             etcd                                     0                   0a4ad14786721       etcd-addons-214441
	b5368f01fa760       90550c43ad2bc                                                                                                                                8 minutes ago       Running             kube-apiserver                           0                   47b3b468b3308       kube-apiserver-addons-214441
	b7a56dc83eb1d       a0af72f2ec6d6                                                                                                                                8 minutes ago       Running             kube-controller-manager                  0                   8a7efdf44079d       kube-controller-manager-addons-214441
	
	
	==> controller_ingress [9b5cb54a94a4] <==
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.27.1
	
	-------------------------------------------------------------------------------
	
	W0929 11:32:43.351074       7 client_config.go:667] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0929 11:32:43.351415       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0929 11:32:43.358945       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="34" git="v1.34.0" state="clean" commit="f28b4c9efbca5c5c0af716d9f2d5702667ee8a45" platform="linux/amd64"
	I0929 11:32:43.617595       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0929 11:32:43.669895       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0929 11:32:43.710566       7 nginx.go:273] "Starting NGINX Ingress controller"
	I0929 11:32:43.774831       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"3dd86085-2445-46aa-9793-08d97be67a7a", APIVersion:"v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0929 11:32:43.777172       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"a0984485-7d74-4106-ac71-518c0a876149", APIVersion:"v1", ResourceVersion:"700", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0929 11:32:43.778120       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"5ee96fc9-531b-4e0a-b522-5b42be6fefd3", APIVersion:"v1", ResourceVersion:"706", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0929 11:32:44.911735       7 nginx.go:319] "Starting NGINX process"
	I0929 11:32:44.915250       7 leaderelection.go:257] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0929 11:32:44.919628       7 nginx.go:339] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0929 11:32:44.922768       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0929 11:32:44.952222       7 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0929 11:32:44.953465       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-9cc49f96f-h99dj"
	I0929 11:32:44.982815       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-h99dj" node="addons-214441"
	I0929 11:32:45.020999       7 controller.go:228] "Backend successfully reloaded"
	I0929 11:32:45.021197       7 controller.go:240] "Initial sync, sleeping for 1 second"
	I0929 11:32:45.021384       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-h99dj", UID:"2bde8bfa-47f0-48da-9e63-cd8e2a0a38c6", APIVersion:"v1", ResourceVersion:"784", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0929 11:32:45.037639       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-h99dj" node="addons-214441"
	
	
	==> coredns [5629c377b605] <==
	[INFO] Reloading complete
	[INFO] 10.244.0.7:52212 - 37212 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000406223s
	[INFO] 10.244.0.7:52212 - 14403 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.001145753s
	[INFO] 10.244.0.7:52212 - 34526 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.001027976s
	[INFO] 10.244.0.7:52212 - 40091 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.002958291s
	[INFO] 10.244.0.7:52212 - 8101 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000112715s
	[INFO] 10.244.0.7:52212 - 55833 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000201304s
	[INFO] 10.244.0.7:52212 - 46374 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000813986s
	[INFO] 10.244.0.7:52212 - 13461 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00014644s
	[INFO] 10.244.0.7:58134 - 57276 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000168682s
	[INFO] 10.244.0.7:58134 - 56902 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000087725s
	[INFO] 10.244.0.7:45806 - 23713 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000124662s
	[INFO] 10.244.0.7:45806 - 23950 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000142715s
	[INFO] 10.244.0.7:42777 - 55128 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080735s
	[INFO] 10.244.0.7:42777 - 54892 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000216294s
	[INFO] 10.244.0.7:36398 - 14124 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000321419s
	[INFO] 10.244.0.7:36398 - 13929 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000550817s
	[INFO] 10.244.0.26:41550 - 7840 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00065483s
	[INFO] 10.244.0.26:48585 - 52888 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000202217s
	[INFO] 10.244.0.26:53114 - 55168 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000190191s
	[INFO] 10.244.0.26:47096 - 26187 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000662248s
	[INFO] 10.244.0.26:48999 - 38178 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00015298s
	[INFO] 10.244.0.26:58286 - 39587 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000285241s
	[INFO] 10.244.0.26:45238 - 61249 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003642198s
	[INFO] 10.244.0.26:33573 - 52185 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.003922074s
	
	
	==> describe nodes <==
	Name:               addons-214441
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-214441
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8c533eb3dce8338d3d4e7231ea97d1c44ed6f81
	                    minikube.k8s.io/name=addons-214441
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T11_31_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-214441
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-214441"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 11:31:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-214441
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:39:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:38:52 +0000   Mon, 29 Sep 2025 11:31:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:38:52 +0000   Mon, 29 Sep 2025 11:31:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:38:52 +0000   Mon, 29 Sep 2025 11:31:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:38:52 +0000   Mon, 29 Sep 2025 11:31:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    addons-214441
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 44179717398847cdb8d861dffe58e059
	  System UUID:                44179717-3988-47cd-b8d8-61dffe58e059
	  Boot ID:                    f083535d-5807-413a-9a6b-1a0bbe2d4432
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-85f6b7fc65-vpv4f     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  gadget                      gadget-xvvvf                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m52s
	  gcp-auth                    gcp-auth-78565c9fb4-7pfmd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m43s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-h99dj    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         7m51s
	  kube-system                 amd-gpu-device-plugin-7jx7f                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                 coredns-66bc5c9577-fkh52                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m3s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m47s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m46s
	  kube-system                 csi-hostpathplugin-8279f                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m47s
	  kube-system                 etcd-addons-214441                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m8s
	  kube-system                 kube-apiserver-addons-214441                250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kube-controller-manager-addons-214441       200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m56s
	  kube-system                 kube-proxy-d9fnb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 kube-scheduler-addons-214441                100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 metrics-server-85b7d694d7-zlrv7             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m55s
	  kube-system                 nvidia-device-plugin-daemonset-x7b8m        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                 registry-66898fdd98-d7zx7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m56s
	  kube-system                 registry-creds-764b6fb674-td8pw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                 registry-proxy-grb7m                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m56s
	  kube-system                 snapshot-controller-7d9fbc56b8-pw4g9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m48s
	  kube-system                 snapshot-controller-7d9fbc56b8-wvh2l        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m48s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	  local-path-storage          local-path-provisioner-648f6765c9-fq5l2     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m54s
	  volcano-system              volcano-admission-589c7dd587-jl4hb          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m52s
	  volcano-system              volcano-admission-init-qkc2n                0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m52s
	  volcano-system              volcano-controllers-7dc6969b45-zsmjr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	  volcano-system              volcano-scheduler-799f64f894-jnn6h          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m48s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-8b84x              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     7m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 8m1s  kube-proxy       
	  Normal  Starting                 8m9s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m8s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m8s  kubelet          Node addons-214441 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m8s  kubelet          Node addons-214441 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m8s  kubelet          Node addons-214441 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m4s  node-controller  Node addons-214441 event: Registered Node addons-214441 in Controller
	  Normal  NodeReady                8m3s  kubelet          Node addons-214441 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep29 11:30] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000056] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003018] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.187834] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000021] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.114045] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.120480] kauditd_printk_skb: 401 callbacks suppressed
	[Sep29 11:31] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.167720] kauditd_printk_skb: 165 callbacks suppressed
	[  +0.127166] kauditd_printk_skb: 19 callbacks suppressed
	[  +0.109876] kauditd_printk_skb: 297 callbacks suppressed
	[  +0.186219] kauditd_printk_skb: 164 callbacks suppressed
	[  +0.000058] kauditd_printk_skb: 275 callbacks suppressed
	[  +1.798616] kauditd_printk_skb: 343 callbacks suppressed
	[ +13.445646] kauditd_printk_skb: 68 callbacks suppressed
	[  +5.142447] kauditd_printk_skb: 20 callbacks suppressed
	[Sep29 11:32] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.199632] kauditd_printk_skb: 38 callbacks suppressed
	[  +1.030429] kauditd_printk_skb: 78 callbacks suppressed
	[  +5.195773] kauditd_printk_skb: 75 callbacks suppressed
	[  +5.274224] kauditd_printk_skb: 150 callbacks suppressed
	[  +5.780886] kauditd_printk_skb: 68 callbacks suppressed
	[  +8.295767] kauditd_printk_skb: 56 callbacks suppressed
	
	
	==> etcd [5df8c088591f] <==
	{"level":"warn","ts":"2025-09-29T11:31:46.527203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:31:46.544347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:31:46.567799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:31:51.946700Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.498601ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:31:51.946796Z","caller":"traceutil/trace.go:172","msg":"trace[1711833416] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1038; }","duration":"113.602206ms","start":"2025-09-29T11:31:51.833177Z","end":"2025-09-29T11:31:51.946779Z","steps":["trace[1711833416] 'range keys from in-memory index tree'  (duration: 113.445823ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:32:00.547127Z","caller":"traceutil/trace.go:172","msg":"trace[2142736121] linearizableReadLoop","detail":"{readStateIndex:1083; appliedIndex:1083; }","duration":"213.104282ms","start":"2025-09-29T11:32:00.334002Z","end":"2025-09-29T11:32:00.547106Z","steps":["trace[2142736121] 'read index received'  (duration: 213.095571ms)","trace[2142736121] 'applied index is now lower than readState.Index'  (duration: 5.027µs)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T11:32:00.549221Z","caller":"traceutil/trace.go:172","msg":"trace[708330809] transaction","detail":"{read_only:false; response_revision:1062; number_of_response:1; }","duration":"237.627688ms","start":"2025-09-29T11:32:00.311582Z","end":"2025-09-29T11:32:00.549210Z","steps":["trace[708330809] 'process raft request'  (duration: 235.951817ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:32:00.549159Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"214.479896ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:00.549416Z","caller":"traceutil/trace.go:172","msg":"trace[283960959] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1061; }","duration":"215.430561ms","start":"2025-09-29T11:32:00.333975Z","end":"2025-09-29T11:32:00.549406Z","steps":["trace[283960959] 'agreement among raft nodes before linearized reading'  (duration: 214.453965ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:32:00.549612Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"178.233017ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:00.549630Z","caller":"traceutil/trace.go:172","msg":"trace[1676271402] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1062; }","duration":"178.256779ms","start":"2025-09-29T11:32:00.371368Z","end":"2025-09-29T11:32:00.549625Z","steps":["trace[1676271402] 'agreement among raft nodes before linearized reading'  (duration: 178.210962ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:32:00.549775Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.256178ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:00.549795Z","caller":"traceutil/trace.go:172","msg":"trace[872905781] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1062; }","duration":"133.278789ms","start":"2025-09-29T11:32:00.416510Z","end":"2025-09-29T11:32:00.549789Z","steps":["trace[872905781] 'agreement among raft nodes before linearized reading'  (duration: 133.240765ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:32:22.619881Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"283.951682ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:22.619953Z","caller":"traceutil/trace.go:172","msg":"trace[256565612] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1138; }","duration":"284.054314ms","start":"2025-09-29T11:32:22.335884Z","end":"2025-09-29T11:32:22.619939Z","steps":["trace[256565612] 'range keys from in-memory index tree'  (duration: 283.898213ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:32:22.620417Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"203.038923ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:22.620455Z","caller":"traceutil/trace.go:172","msg":"trace[2141218366] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1138; }","duration":"203.079865ms","start":"2025-09-29T11:32:22.417365Z","end":"2025-09-29T11:32:22.620444Z","steps":["trace[2141218366] 'range keys from in-memory index tree'  (duration: 202.851561ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:32:37.446139Z","caller":"traceutil/trace.go:172","msg":"trace[1518739598] linearizableReadLoop","detail":"{readStateIndex:1281; appliedIndex:1281; }","duration":"111.376689ms","start":"2025-09-29T11:32:37.334743Z","end":"2025-09-29T11:32:37.446120Z","steps":["trace[1518739598] 'read index received'  (duration: 111.370356ms)","trace[1518739598] 'applied index is now lower than readState.Index'  (duration: 5.449µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:32:37.446365Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.596508ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:37.446409Z","caller":"traceutil/trace.go:172","msg":"trace[333303529] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1250; }","duration":"111.664223ms","start":"2025-09-29T11:32:37.334737Z","end":"2025-09-29T11:32:37.446401Z","steps":["trace[333303529] 'agreement among raft nodes before linearized reading'  (duration: 111.566754ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:32:37.447956Z","caller":"traceutil/trace.go:172","msg":"trace[1818807407] transaction","detail":"{read_only:false; response_revision:1251; number_of_response:1; }","duration":"216.083326ms","start":"2025-09-29T11:32:37.231864Z","end":"2025-09-29T11:32:37.447947Z","steps":["trace[1818807407] 'process raft request'  (duration: 214.333833ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:32:41.490882Z","caller":"traceutil/trace.go:172","msg":"trace[1943079177] linearizableReadLoop","detail":"{readStateIndex:1295; appliedIndex:1295; }","duration":"156.252408ms","start":"2025-09-29T11:32:41.334599Z","end":"2025-09-29T11:32:41.490852Z","steps":["trace[1943079177] 'read index received'  (duration: 156.245254ms)","trace[1943079177] 'applied index is now lower than readState.Index'  (duration: 4.49µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:32:41.491088Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.469181ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:41.491110Z","caller":"traceutil/trace.go:172","msg":"trace[366978766] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1264; }","duration":"156.509563ms","start":"2025-09-29T11:32:41.334595Z","end":"2025-09-29T11:32:41.491105Z","steps":["trace[366978766] 'agreement among raft nodes before linearized reading'  (duration: 156.436502ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:32:41.491567Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-29T11:32:41.150207Z","time spent":"341.358415ms","remote":"127.0.0.1:41482","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	
	
	==> gcp-auth [31549aa99d4d] <==
	2025/09/29 11:32:48 GCP Auth Webhook started!
	
	
	==> kernel <==
	 11:39:20 up 8 min,  0 users,  load average: 0.16, 0.81, 0.64
	Linux addons-214441 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [b5368f01fa76] <==
	E0929 11:32:19.325625       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.25.227:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.25.227:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.25.227:443: connect: connection refused" logger="UnhandledError"
	E0929 11:32:19.486676       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.25.227:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.25.227:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.25.227:443: connect: connection refused" logger="UnhandledError"
	I0929 11:32:19.562307       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0929 11:32:19.808597       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.25.227:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.25.227:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.25.227:443: connect: connection refused" logger="UnhandledError"
	W0929 11:32:20.177130       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 11:32:20.177408       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 11:32:20.177426       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 11:32:20.177309       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 11:32:20.177493       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 11:32:20.178737       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 11:32:20.537717       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0929 11:33:22.473236       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:33:25.012867       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:34:24.998903       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:34:51.908792       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:35:44.426933       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:36:08.861452       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:36:59.641615       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:37:09.147049       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:38:08.905952       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:38:21.134856       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:39:16.594587       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [b7a56dc83eb1] <==
	I0929 11:31:16.181350       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-214441"
	I0929 11:31:16.181580       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0929 11:31:16.181995       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 11:31:16.182480       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 11:31:16.185142       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0929 11:31:16.187067       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:31:16.187217       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 11:31:16.187241       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 11:31:16.191585       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 11:31:21.221543       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E0929 11:31:25.868897       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E0929 11:31:46.149484       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 11:31:46.149905       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podgroups.scheduling.volcano.sh"
	I0929 11:31:46.150195       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="commands.bus.volcano.sh"
	I0929 11:31:46.150226       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0929 11:31:46.150243       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch.volcano.sh"
	I0929 11:31:46.150302       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobtemplates.flow.volcano.sh"
	I0929 11:31:46.150395       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobflows.flow.volcano.sh"
	I0929 11:31:46.153366       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0929 11:31:46.200961       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0929 11:31:46.209620       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0929 11:31:47.554513       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 11:31:47.611471       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 11:32:17.586210       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 11:32:17.635151       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [cf32cea21506] <==
	I0929 11:31:18.966107       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:31:19.067553       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:31:19.067585       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.76"]
	E0929 11:31:19.067663       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:31:19.367843       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 11:31:19.367925       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 11:31:19.367957       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:31:19.410838       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:31:19.411105       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:31:19.411117       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:31:19.438109       1 config.go:200] "Starting service config controller"
	I0929 11:31:19.438145       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:31:19.438165       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:31:19.438169       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:31:19.438197       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:31:19.438201       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:31:19.443612       1 config.go:309] "Starting node config controller"
	I0929 11:31:19.443644       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:31:19.443650       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 11:31:19.552512       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:31:19.552650       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 11:31:19.639397       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [1b712309a590] <==
	E0929 11:31:09.221196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 11:31:09.221236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 11:31:09.222033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 11:31:09.225006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 11:31:09.225514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 11:31:09.225802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 11:31:09.225865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 11:31:09.225922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 11:31:09.226012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 11:31:09.226045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 11:31:10.048406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 11:31:10.133629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 11:31:10.190360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 11:31:10.277104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 11:31:10.293798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 11:31:10.302970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 11:31:10.326331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 11:31:10.346485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 11:31:10.373940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 11:31:10.450205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 11:31:10.476705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 11:31:10.548049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 11:31:10.584420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 11:31:10.696768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 11:31:12.791660       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 11:38:13 addons-214441 kubelet[2504]: E0929 11:38:13.116169    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-qkc2n" podUID="991c521e-d31a-420f-a3c6-2afa006c70ee"
	Sep 29 11:38:15 addons-214441 kubelet[2504]: E0929 11:38:15.106149    2504 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/volcanosh/vc-scheduler:v1.12.2@sha256:6e28f0f79d4cd09c1c34afaba41623c1b4d0fd7087cc99d6449a8a62e073b50e"
	Sep 29 11:38:15 addons-214441 kubelet[2504]: E0929 11:38:15.106207    2504 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/volcanosh/vc-scheduler:v1.12.2@sha256:6e28f0f79d4cd09c1c34afaba41623c1b4d0fd7087cc99d6449a8a62e073b50e"
	Sep 29 11:38:15 addons-214441 kubelet[2504]: E0929 11:38:15.106764    2504 kuberuntime_manager.go:1449] "Unhandled Error" err="container volcano-scheduler start failed in pod volcano-scheduler-799f64f894-jnn6h_volcano-system(6f384a13-5a13-40d7-bedc-aaf02b7cc343): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 11:38:15 addons-214441 kubelet[2504]: E0929 11:38:15.106806    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-799f64f894-jnn6h" podUID="6f384a13-5a13-40d7-bedc-aaf02b7cc343"
	Sep 29 11:38:22 addons-214441 kubelet[2504]: E0929 11:38:22.047120    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.12.2@sha256:286112e70bdbf88174a66895bb3c64dd9026b5a762025b61bcd8f6cac04e1b90\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-7dc6969b45-zsmjr" podUID="2a533a66-4422-41ee-beed-58fae52e36b3"
	Sep 29 11:38:24 addons-214441 kubelet[2504]: E0929 11:38:24.046188    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.12.2@sha256:b7c3bd73e2d9240cf17662451d50e0d73654342235a66cdfb2ec221f1628ae35\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-qkc2n" podUID="991c521e-d31a-420f-a3c6-2afa006c70ee"
	Sep 29 11:38:26 addons-214441 kubelet[2504]: E0929 11:38:26.063863    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-8b84x" podUID="776cffb2-d8ee-4337-a96e-2a5d06549491"
	Sep 29 11:38:30 addons-214441 kubelet[2504]: E0929 11:38:30.045590    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.12.2@sha256:6e28f0f79d4cd09c1c34afaba41623c1b4d0fd7087cc99d6449a8a62e073b50e\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-799f64f894-jnn6h" podUID="6f384a13-5a13-40d7-bedc-aaf02b7cc343"
	Sep 29 11:38:36 addons-214441 kubelet[2504]: E0929 11:38:36.045637    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.12.2@sha256:b7c3bd73e2d9240cf17662451d50e0d73654342235a66cdfb2ec221f1628ae35\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-qkc2n" podUID="991c521e-d31a-420f-a3c6-2afa006c70ee"
	Sep 29 11:38:36 addons-214441 kubelet[2504]: E0929 11:38:36.046659    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.12.2@sha256:286112e70bdbf88174a66895bb3c64dd9026b5a762025b61bcd8f6cac04e1b90\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-7dc6969b45-zsmjr" podUID="2a533a66-4422-41ee-beed-58fae52e36b3"
	Sep 29 11:38:41 addons-214441 kubelet[2504]: E0929 11:38:41.047736    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-8b84x" podUID="776cffb2-d8ee-4337-a96e-2a5d06549491"
	Sep 29 11:38:44 addons-214441 kubelet[2504]: E0929 11:38:44.060949    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.12.2@sha256:6e28f0f79d4cd09c1c34afaba41623c1b4d0fd7087cc99d6449a8a62e073b50e\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-799f64f894-jnn6h" podUID="6f384a13-5a13-40d7-bedc-aaf02b7cc343"
	Sep 29 11:38:46 addons-214441 kubelet[2504]: I0929 11:38:46.045817    2504 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-7jx7f" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 11:38:48 addons-214441 kubelet[2504]: E0929 11:38:48.046073    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.12.2@sha256:286112e70bdbf88174a66895bb3c64dd9026b5a762025b61bcd8f6cac04e1b90\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-7dc6969b45-zsmjr" podUID="2a533a66-4422-41ee-beed-58fae52e36b3"
	Sep 29 11:38:50 addons-214441 kubelet[2504]: E0929 11:38:50.046508    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.12.2@sha256:b7c3bd73e2d9240cf17662451d50e0d73654342235a66cdfb2ec221f1628ae35\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-qkc2n" podUID="991c521e-d31a-420f-a3c6-2afa006c70ee"
	Sep 29 11:38:53 addons-214441 kubelet[2504]: I0929 11:38:53.046018    2504 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-grb7m" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 11:38:54 addons-214441 kubelet[2504]: E0929 11:38:54.050367    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-8b84x" podUID="776cffb2-d8ee-4337-a96e-2a5d06549491"
	Sep 29 11:38:56 addons-214441 kubelet[2504]: E0929 11:38:56.045956    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.12.2@sha256:6e28f0f79d4cd09c1c34afaba41623c1b4d0fd7087cc99d6449a8a62e073b50e\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-799f64f894-jnn6h" podUID="6f384a13-5a13-40d7-bedc-aaf02b7cc343"
	Sep 29 11:39:02 addons-214441 kubelet[2504]: E0929 11:39:02.047122    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.12.2@sha256:286112e70bdbf88174a66895bb3c64dd9026b5a762025b61bcd8f6cac04e1b90\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-7dc6969b45-zsmjr" podUID="2a533a66-4422-41ee-beed-58fae52e36b3"
	Sep 29 11:39:05 addons-214441 kubelet[2504]: E0929 11:39:05.046715    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.12.2@sha256:b7c3bd73e2d9240cf17662451d50e0d73654342235a66cdfb2ec221f1628ae35\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-qkc2n" podUID="991c521e-d31a-420f-a3c6-2afa006c70ee"
	Sep 29 11:39:07 addons-214441 kubelet[2504]: E0929 11:39:07.049874    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-8b84x" podUID="776cffb2-d8ee-4337-a96e-2a5d06549491"
	Sep 29 11:39:09 addons-214441 kubelet[2504]: E0929 11:39:09.045980    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.12.2@sha256:6e28f0f79d4cd09c1c34afaba41623c1b4d0fd7087cc99d6449a8a62e073b50e\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-799f64f894-jnn6h" podUID="6f384a13-5a13-40d7-bedc-aaf02b7cc343"
	Sep 29 11:39:17 addons-214441 kubelet[2504]: E0929 11:39:17.046400    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.12.2@sha256:286112e70bdbf88174a66895bb3c64dd9026b5a762025b61bcd8f6cac04e1b90\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-7dc6969b45-zsmjr" podUID="2a533a66-4422-41ee-beed-58fae52e36b3"
	Sep 29 11:39:19 addons-214441 kubelet[2504]: E0929 11:39:19.045426    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.12.2@sha256:b7c3bd73e2d9240cf17662451d50e0d73654342235a66cdfb2ec221f1628ae35\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-qkc2n" podUID="991c521e-d31a-420f-a3c6-2afa006c70ee"
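Every ImagePullBackOff in the kubelet entries above has the same root cause: Docker Hub's unauthenticated pull rate limit (toomanyrequests). A minimal mitigation sketch, assuming a Docker Hub account is available; the secret name regcred and the credentials are placeholders, not values from this run, and the Volcano components run under their own service accounts rather than default:

  # Sketch only: create an authenticated Docker Hub pull secret so pulls stop
  # counting against the anonymous rate limit.
  kubectl --context addons-214441 -n volcano-system create secret docker-registry regcred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<dockerhub-user> --docker-password=<dockerhub-token>
  # Attach the secret to the service account the failing pods use (shown here for
  # default as an example; adjust per workload).
  kubectl --context addons-214441 -n volcano-system patch serviceaccount default \
      -p '{"imagePullSecrets":[{"name":"regcred"}]}'

The interleaved "Unable to retrieve pull secret ... gcp-auth not found" lines appear to be only warnings about the secret the gcp-auth addon would normally provide; the pulls are blocked by the rate limit, not by that missing secret.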
	
	
	==> storage-provisioner [388ea771a1c8] <==
	W0929 11:38:56.008522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:38:58.013332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:38:58.022578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:39:00.026644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:39:00.033229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:39:02.037138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:39:02.044109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:39:04.048842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:39:04.061582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:39:06.067336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:39:06.073677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:39:08.077036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:39:08.083409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:39:10.086703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:39:10.094962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:39:12.099954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:39:12.106194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:39:14.111374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:39:14.121649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:39:16.126470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:39:16.133793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:39:18.138635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:39:18.144646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:39:20.150384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:39:20.160520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
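The storage-provisioner block in the dump above only logs a client-go deprecation warning (v1 Endpoints superseded by discovery.k8s.io/v1 EndpointSlice); it is noise here rather than a failure. For reference, and assuming nothing beyond a working kubeconfig, the same data can be read from the non-deprecated API:

  # Illustrative only: query EndpointSlices instead of the deprecated v1 Endpoints.
  kubectl --context addons-214441 get endpointslices.discovery.k8s.io -A
  # Each EndpointSlice carries a kubernetes.io/service-name label pointing back at its Service.
  kubectl --context addons-214441 -n default get endpointslices.discovery.k8s.io \
      -l kubernetes.io/service-name=kubernetes -o yaml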
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-214441 -n addons-214441
helpers_test.go:269: (dbg) Run:  kubectl --context addons-214441 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-s6nvq ingress-nginx-admission-patch-tp6tp registry-creds-764b6fb674-td8pw volcano-admission-589c7dd587-jl4hb volcano-admission-init-qkc2n volcano-controllers-7dc6969b45-zsmjr volcano-scheduler-799f64f894-jnn6h yakd-dashboard-5ff678cb9-8b84x
helpers_test.go:282: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-214441 describe pod ingress-nginx-admission-create-s6nvq ingress-nginx-admission-patch-tp6tp registry-creds-764b6fb674-td8pw volcano-admission-589c7dd587-jl4hb volcano-admission-init-qkc2n volcano-controllers-7dc6969b45-zsmjr volcano-scheduler-799f64f894-jnn6h yakd-dashboard-5ff678cb9-8b84x
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-214441 describe pod ingress-nginx-admission-create-s6nvq ingress-nginx-admission-patch-tp6tp registry-creds-764b6fb674-td8pw volcano-admission-589c7dd587-jl4hb volcano-admission-init-qkc2n volcano-controllers-7dc6969b45-zsmjr volcano-scheduler-799f64f894-jnn6h yakd-dashboard-5ff678cb9-8b84x: exit status 1 (80.774318ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-s6nvq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tp6tp" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-td8pw" not found
	Error from server (NotFound): pods "volcano-admission-589c7dd587-jl4hb" not found
	Error from server (NotFound): pods "volcano-admission-init-qkc2n" not found
	Error from server (NotFound): pods "volcano-controllers-7dc6969b45-zsmjr" not found
	Error from server (NotFound): pods "volcano-scheduler-799f64f894-jnn6h" not found
	Error from server (NotFound): pods "yakd-dashboard-5ff678cb9-8b84x" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-214441 describe pod ingress-nginx-admission-create-s6nvq ingress-nginx-admission-patch-tp6tp registry-creds-764b6fb674-td8pw volcano-admission-589c7dd587-jl4hb volcano-admission-init-qkc2n volcano-controllers-7dc6969b45-zsmjr volcano-scheduler-799f64f894-jnn6h yakd-dashboard-5ff678cb9-8b84x: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214441 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-214441 addons disable volcano --alsologtostderr -v=1: (11.484435348s)
--- FAIL: TestAddons/serial/Volcano (374.11s)
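Because every pod in this test failed on the same toomanyrequests response, one hedged workaround (not performed by this job) is to pull the images on a host that is authenticated or not rate-limited and side-load them with minikube image load before enabling the addon. A sketch using the tags from the logs above; note the addon manifests pin these images by digest, so the loaded images must resolve to the same digests:

  # Sketch: pre-load the Volcano images so kubelet never has to pull from Docker Hub.
  for img in volcanosh/vc-scheduler:v1.12.2 \
             volcanosh/vc-controller-manager:v1.12.2 \
             volcanosh/vc-webhook-manager:v1.12.2; do
      docker pull "docker.io/${img}"                         # run where pulls are authenticated
      minikube -p addons-214441 image load "docker.io/${img}"
  done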

                                                
                                    
TestAddons/parallel/Ingress (492.04s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-214441 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-214441 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-214441 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [182f1b86-e027-4d79-a5a9-272a05688c3b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-214441 -n addons-214441
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-09-29 11:47:51.935045569 +0000 UTC m=+1058.089779256
addons_test.go:252: (dbg) Run:  kubectl --context addons-214441 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-214441 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-214441/192.168.39.76
Start Time:       Mon, 29 Sep 2025 11:39:51 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.28
IPs:
IP:  10.244.0.28
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rdmgz (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-rdmgz:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  8m1s                    default-scheduler  Successfully assigned default/nginx to addons-214441
Warning  Failed     8m                      kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    5m7s (x5 over 8m)       kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     5m7s (x5 over 8m)       kubelet            Error: ErrImagePull
Warning  Failed     5m7s (x4 over 7m46s)    kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    2m59s (x21 over 7m59s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     2m59s (x21 over 7m59s)  kubelet            Error: ImagePullBackOff
addons_test.go:252: (dbg) Run:  kubectl --context addons-214441 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-214441 logs nginx -n default: exit status 1 (79.370618ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:252: kubectl --context addons-214441 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
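What times out here is simply "a pod labelled run=nginx becomes Ready within 8m0s". Roughly the same check can be reproduced by hand with kubectl wait (illustrative; the test harness uses its own poller):

  kubectl --context addons-214441 -n default wait pod -l run=nginx \
      --for=condition=Ready --timeout=8m0s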
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-214441 -n addons-214441
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-214441 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-214441 logs -n 25: (1.090161368s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ delete  │ -p download-only-221115                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ download-only-221115 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ delete  │ -p download-only-383930                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ download-only-383930 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ delete  │ -p download-only-221115                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ download-only-221115 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ start   │ --download-only -p binary-mirror-005122 --alsologtostderr --binary-mirror http://127.0.0.1:35607 --driver=kvm2  --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-005122 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ delete  │ -p binary-mirror-005122                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ binary-mirror-005122 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ addons  │ disable dashboard -p addons-214441                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ addons  │ enable dashboard -p addons-214441                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ start   │ -p addons-214441 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:33 UTC │
	│ addons  │ addons-214441 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:39 UTC │ 29 Sep 25 11:39 UTC │
	│ addons  │ addons-214441 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:39 UTC │ 29 Sep 25 11:39 UTC │
	│ addons  │ enable headlamp -p addons-214441 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:39 UTC │ 29 Sep 25 11:39 UTC │
	│ addons  │ addons-214441 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:39 UTC │ 29 Sep 25 11:39 UTC │
	│ addons  │ addons-214441 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ addons  │ addons-214441 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ ip      │ addons-214441 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ addons  │ addons-214441 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ addons  │ addons-214441 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-214441                                                                                                                                                                                                                                                                                                                                                                                            │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ addons  │ addons-214441 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ addons  │ addons-214441 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:42 UTC │ 29 Sep 25 11:42 UTC │
	│ addons  │ addons-214441 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:42 UTC │ 29 Sep 25 11:42 UTC │
	│ addons  │ addons-214441 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                           │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:45 UTC │ 29 Sep 25 11:45 UTC │
	│ addons  │ addons-214441 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:46 UTC │ 29 Sep 25 11:46 UTC │
	│ addons  │ addons-214441 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:46 UTC │ 29 Sep 25 11:46 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:30:26
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:30:26.464374  595895 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:30:26.464481  595895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:30:26.464487  595895 out.go:374] Setting ErrFile to fd 2...
	I0929 11:30:26.464493  595895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:30:26.464787  595895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21654-591397/.minikube/bin
	I0929 11:30:26.465454  595895 out.go:368] Setting JSON to false
	I0929 11:30:26.466447  595895 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":4374,"bootTime":1759141052,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:30:26.466553  595895 start.go:140] virtualization: kvm guest
	I0929 11:30:26.468688  595895 out.go:179] * [addons-214441] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:30:26.470181  595895 out.go:179]   - MINIKUBE_LOCATION=21654
	I0929 11:30:26.470220  595895 notify.go:220] Checking for updates...
	I0929 11:30:26.473145  595895 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:30:26.474634  595895 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21654-591397/kubeconfig
	I0929 11:30:26.475793  595895 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21654-591397/.minikube
	I0929 11:30:26.477353  595895 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:30:26.478534  595895 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:30:26.479959  595895 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:30:26.513451  595895 out.go:179] * Using the kvm2 driver based on user configuration
	I0929 11:30:26.514622  595895 start.go:304] selected driver: kvm2
	I0929 11:30:26.514644  595895 start.go:924] validating driver "kvm2" against <nil>
	I0929 11:30:26.514659  595895 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:30:26.515675  595895 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:30:26.515785  595895 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21654-591397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:30:26.530531  595895 install.go:163] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:30:26.530568  595895 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21654-591397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:30:26.545187  595895 install.go:163] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:30:26.545244  595895 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 11:30:26.545491  595895 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:30:26.545527  595895 cni.go:84] Creating CNI manager for ""
	I0929 11:30:26.545570  595895 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 11:30:26.545579  595895 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 11:30:26.545628  595895 start.go:348] cluster config:
	{Name:addons-214441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Network
Plugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0
s}
	I0929 11:30:26.545714  595895 iso.go:125] acquiring lock: {Name:mk3bf2644aacab696b9f4d566a6d81a30d75b71a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:30:26.547400  595895 out.go:179] * Starting "addons-214441" primary control-plane node in "addons-214441" cluster
	I0929 11:30:26.548855  595895 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 11:30:26.548909  595895 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21654-591397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0929 11:30:26.548918  595895 cache.go:58] Caching tarball of preloaded images
	I0929 11:30:26.549035  595895 preload.go:172] Found /home/jenkins/minikube-integration/21654-591397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0929 11:30:26.549046  595895 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 11:30:26.549389  595895 profile.go:143] Saving config to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/config.json ...
	I0929 11:30:26.549415  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/config.json: {Name:mka28e9e486990f30eb3eb321797c26d13a435f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:26.549559  595895 start.go:360] acquireMachinesLock for addons-214441: {Name:mka3370f06ebed6e47b43729e748683065f344f5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0929 11:30:26.549614  595895 start.go:364] duration metric: took 40.43µs to acquireMachinesLock for "addons-214441"
	I0929 11:30:26.549633  595895 start.go:93] Provisioning new machine with config: &{Name:addons-214441 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 Clu
sterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 11:30:26.549681  595895 start.go:125] createHost starting for "" (driver="kvm2")
	I0929 11:30:26.551210  595895 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0929 11:30:26.551360  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:30:26.551417  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:30:26.564991  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37291
	I0929 11:30:26.565640  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:30:26.566242  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:30:26.566262  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:30:26.566742  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:30:26.566933  595895 main.go:141] libmachine: (addons-214441) Calling .GetMachineName
	I0929 11:30:26.567150  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:26.567316  595895 start.go:159] libmachine.API.Create for "addons-214441" (driver="kvm2")
	I0929 11:30:26.567351  595895 client.go:168] LocalClient.Create starting
	I0929 11:30:26.567402  595895 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem
	I0929 11:30:26.955780  595895 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/cert.pem
	I0929 11:30:27.214636  595895 main.go:141] libmachine: Running pre-create checks...
	I0929 11:30:27.214665  595895 main.go:141] libmachine: (addons-214441) Calling .PreCreateCheck
	I0929 11:30:27.215304  595895 main.go:141] libmachine: (addons-214441) Calling .GetConfigRaw
	I0929 11:30:27.215869  595895 main.go:141] libmachine: Creating machine...
	I0929 11:30:27.215887  595895 main.go:141] libmachine: (addons-214441) Calling .Create
	I0929 11:30:27.216119  595895 main.go:141] libmachine: (addons-214441) creating domain...
	I0929 11:30:27.216141  595895 main.go:141] libmachine: (addons-214441) creating network...
	I0929 11:30:27.217698  595895 main.go:141] libmachine: (addons-214441) DBG | found existing default network
	I0929 11:30:27.217987  595895 main.go:141] libmachine: (addons-214441) DBG | <network>
	I0929 11:30:27.218041  595895 main.go:141] libmachine: (addons-214441) DBG |   <name>default</name>
	I0929 11:30:27.218077  595895 main.go:141] libmachine: (addons-214441) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I0929 11:30:27.218099  595895 main.go:141] libmachine: (addons-214441) DBG |   <forward mode='nat'>
	I0929 11:30:27.218124  595895 main.go:141] libmachine: (addons-214441) DBG |     <nat>
	I0929 11:30:27.218134  595895 main.go:141] libmachine: (addons-214441) DBG |       <port start='1024' end='65535'/>
	I0929 11:30:27.218144  595895 main.go:141] libmachine: (addons-214441) DBG |     </nat>
	I0929 11:30:27.218151  595895 main.go:141] libmachine: (addons-214441) DBG |   </forward>
	I0929 11:30:27.218161  595895 main.go:141] libmachine: (addons-214441) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I0929 11:30:27.218190  595895 main.go:141] libmachine: (addons-214441) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I0929 11:30:27.218203  595895 main.go:141] libmachine: (addons-214441) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I0929 11:30:27.218212  595895 main.go:141] libmachine: (addons-214441) DBG |     <dhcp>
	I0929 11:30:27.218222  595895 main.go:141] libmachine: (addons-214441) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I0929 11:30:27.218232  595895 main.go:141] libmachine: (addons-214441) DBG |     </dhcp>
	I0929 11:30:27.218245  595895 main.go:141] libmachine: (addons-214441) DBG |   </ip>
	I0929 11:30:27.218256  595895 main.go:141] libmachine: (addons-214441) DBG | </network>
	I0929 11:30:27.218263  595895 main.go:141] libmachine: (addons-214441) DBG | 
	I0929 11:30:27.219018  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.218796  595923 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000200f10}
	I0929 11:30:27.219127  595895 main.go:141] libmachine: (addons-214441) DBG | defining private network:
	I0929 11:30:27.219156  595895 main.go:141] libmachine: (addons-214441) DBG | 
	I0929 11:30:27.219168  595895 main.go:141] libmachine: (addons-214441) DBG | <network>
	I0929 11:30:27.219179  595895 main.go:141] libmachine: (addons-214441) DBG |   <name>mk-addons-214441</name>
	I0929 11:30:27.219187  595895 main.go:141] libmachine: (addons-214441) DBG |   <dns enable='no'/>
	I0929 11:30:27.219194  595895 main.go:141] libmachine: (addons-214441) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0929 11:30:27.219200  595895 main.go:141] libmachine: (addons-214441) DBG |     <dhcp>
	I0929 11:30:27.219208  595895 main.go:141] libmachine: (addons-214441) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0929 11:30:27.219214  595895 main.go:141] libmachine: (addons-214441) DBG |     </dhcp>
	I0929 11:30:27.219218  595895 main.go:141] libmachine: (addons-214441) DBG |   </ip>
	I0929 11:30:27.219224  595895 main.go:141] libmachine: (addons-214441) DBG | </network>
	I0929 11:30:27.219227  595895 main.go:141] libmachine: (addons-214441) DBG | 
	I0929 11:30:27.225021  595895 main.go:141] libmachine: (addons-214441) DBG | creating private network mk-addons-214441 192.168.39.0/24...
	I0929 11:30:27.300287  595895 main.go:141] libmachine: (addons-214441) DBG | private network mk-addons-214441 192.168.39.0/24 created
	I0929 11:30:27.300635  595895 main.go:141] libmachine: (addons-214441) DBG | <network>
	I0929 11:30:27.300651  595895 main.go:141] libmachine: (addons-214441) DBG |   <name>mk-addons-214441</name>
	I0929 11:30:27.300675  595895 main.go:141] libmachine: (addons-214441) setting up store path in /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441 ...
	I0929 11:30:27.300695  595895 main.go:141] libmachine: (addons-214441) building disk image from file:///home/jenkins/minikube-integration/21654-591397/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I0929 11:30:27.300713  595895 main.go:141] libmachine: (addons-214441) DBG |   <uuid>9d6191f7-7df6-4691-bff3-3dbacc8ac925</uuid>
	I0929 11:30:27.300719  595895 main.go:141] libmachine: (addons-214441) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I0929 11:30:27.300726  595895 main.go:141] libmachine: (addons-214441) DBG |   <mac address='52:54:00:ff:bc:22'/>
	I0929 11:30:27.300730  595895 main.go:141] libmachine: (addons-214441) DBG |   <dns enable='no'/>
	I0929 11:30:27.300736  595895 main.go:141] libmachine: (addons-214441) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0929 11:30:27.300741  595895 main.go:141] libmachine: (addons-214441) DBG |     <dhcp>
	I0929 11:30:27.300747  595895 main.go:141] libmachine: (addons-214441) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0929 11:30:27.300754  595895 main.go:141] libmachine: (addons-214441) DBG |     </dhcp>
	I0929 11:30:27.300758  595895 main.go:141] libmachine: (addons-214441) DBG |   </ip>
	I0929 11:30:27.300763  595895 main.go:141] libmachine: (addons-214441) DBG | </network>
	I0929 11:30:27.300770  595895 main.go:141] libmachine: (addons-214441) DBG | 
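The exchange above is libmachine defining and creating the private libvirt network mk-addons-214441 (192.168.39.0/24, DHCP range 192.168.39.2-253, DNS disabled). When a run like this needs manual inspection, the equivalent state can be read back on the host with virsh; this is an inspection sketch, not something the test performs:

  virsh -c qemu:///system net-list --all
  virsh -c qemu:///system net-dumpxml mk-addons-214441
  virsh -c qemu:///system net-dhcp-leases mk-addons-214441   # lease handed to the VM (192.168.39.76 in this run)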
	I0929 11:30:27.300780  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.300615  595923 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21654-591397/.minikube
	I0929 11:30:27.300970  595895 main.go:141] libmachine: (addons-214441) Downloading /home/jenkins/minikube-integration/21654-591397/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21654-591397/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I0929 11:30:27.567829  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.567633  595923 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa...
	I0929 11:30:27.812384  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.812174  595923 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/addons-214441.rawdisk...
	I0929 11:30:27.812428  595895 main.go:141] libmachine: (addons-214441) DBG | Writing magic tar header
	I0929 11:30:27.812454  595895 main.go:141] libmachine: (addons-214441) DBG | Writing SSH key tar header
	I0929 11:30:27.812465  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.812330  595923 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441 ...
	I0929 11:30:27.812480  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441
	I0929 11:30:27.812548  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21654-591397/.minikube/machines
	I0929 11:30:27.812584  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441 (perms=drwx------)
	I0929 11:30:27.812594  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21654-591397/.minikube
	I0929 11:30:27.812609  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21654-591397
	I0929 11:30:27.812617  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0929 11:30:27.812625  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins
	I0929 11:30:27.812632  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home
	I0929 11:30:27.812642  595895 main.go:141] libmachine: (addons-214441) DBG | skipping /home - not owner
	I0929 11:30:27.812734  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration/21654-591397/.minikube/machines (perms=drwxr-xr-x)
	I0929 11:30:27.812784  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration/21654-591397/.minikube (perms=drwxr-xr-x)
	I0929 11:30:27.812829  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration/21654-591397 (perms=drwxrwxr-x)
	I0929 11:30:27.812851  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0929 11:30:27.812866  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0929 11:30:27.812895  595895 main.go:141] libmachine: (addons-214441) defining domain...
	I0929 11:30:27.814169  595895 main.go:141] libmachine: (addons-214441) defining domain using XML: 
	I0929 11:30:27.814189  595895 main.go:141] libmachine: (addons-214441) <domain type='kvm'>
	I0929 11:30:27.814197  595895 main.go:141] libmachine: (addons-214441)   <name>addons-214441</name>
	I0929 11:30:27.814204  595895 main.go:141] libmachine: (addons-214441)   <memory unit='MiB'>4096</memory>
	I0929 11:30:27.814211  595895 main.go:141] libmachine: (addons-214441)   <vcpu>2</vcpu>
	I0929 11:30:27.814217  595895 main.go:141] libmachine: (addons-214441)   <features>
	I0929 11:30:27.814224  595895 main.go:141] libmachine: (addons-214441)     <acpi/>
	I0929 11:30:27.814236  595895 main.go:141] libmachine: (addons-214441)     <apic/>
	I0929 11:30:27.814260  595895 main.go:141] libmachine: (addons-214441)     <pae/>
	I0929 11:30:27.814274  595895 main.go:141] libmachine: (addons-214441)   </features>
	I0929 11:30:27.814283  595895 main.go:141] libmachine: (addons-214441)   <cpu mode='host-passthrough'>
	I0929 11:30:27.814290  595895 main.go:141] libmachine: (addons-214441)   </cpu>
	I0929 11:30:27.814300  595895 main.go:141] libmachine: (addons-214441)   <os>
	I0929 11:30:27.814310  595895 main.go:141] libmachine: (addons-214441)     <type>hvm</type>
	I0929 11:30:27.814319  595895 main.go:141] libmachine: (addons-214441)     <boot dev='cdrom'/>
	I0929 11:30:27.814323  595895 main.go:141] libmachine: (addons-214441)     <boot dev='hd'/>
	I0929 11:30:27.814331  595895 main.go:141] libmachine: (addons-214441)     <bootmenu enable='no'/>
	I0929 11:30:27.814337  595895 main.go:141] libmachine: (addons-214441)   </os>
	I0929 11:30:27.814342  595895 main.go:141] libmachine: (addons-214441)   <devices>
	I0929 11:30:27.814352  595895 main.go:141] libmachine: (addons-214441)     <disk type='file' device='cdrom'>
	I0929 11:30:27.814381  595895 main.go:141] libmachine: (addons-214441)       <source file='/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/boot2docker.iso'/>
	I0929 11:30:27.814393  595895 main.go:141] libmachine: (addons-214441)       <target dev='hdc' bus='scsi'/>
	I0929 11:30:27.814438  595895 main.go:141] libmachine: (addons-214441)       <readonly/>
	I0929 11:30:27.814469  595895 main.go:141] libmachine: (addons-214441)     </disk>
	I0929 11:30:27.814485  595895 main.go:141] libmachine: (addons-214441)     <disk type='file' device='disk'>
	I0929 11:30:27.814501  595895 main.go:141] libmachine: (addons-214441)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0929 11:30:27.814519  595895 main.go:141] libmachine: (addons-214441)       <source file='/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/addons-214441.rawdisk'/>
	I0929 11:30:27.814537  595895 main.go:141] libmachine: (addons-214441)       <target dev='hda' bus='virtio'/>
	I0929 11:30:27.814551  595895 main.go:141] libmachine: (addons-214441)     </disk>
	I0929 11:30:27.814564  595895 main.go:141] libmachine: (addons-214441)     <interface type='network'>
	I0929 11:30:27.814577  595895 main.go:141] libmachine: (addons-214441)       <source network='mk-addons-214441'/>
	I0929 11:30:27.814587  595895 main.go:141] libmachine: (addons-214441)       <model type='virtio'/>
	I0929 11:30:27.814598  595895 main.go:141] libmachine: (addons-214441)     </interface>
	I0929 11:30:27.814608  595895 main.go:141] libmachine: (addons-214441)     <interface type='network'>
	I0929 11:30:27.814616  595895 main.go:141] libmachine: (addons-214441)       <source network='default'/>
	I0929 11:30:27.814644  595895 main.go:141] libmachine: (addons-214441)       <model type='virtio'/>
	I0929 11:30:27.814658  595895 main.go:141] libmachine: (addons-214441)     </interface>
	I0929 11:30:27.814670  595895 main.go:141] libmachine: (addons-214441)     <serial type='pty'>
	I0929 11:30:27.814681  595895 main.go:141] libmachine: (addons-214441)       <target port='0'/>
	I0929 11:30:27.814692  595895 main.go:141] libmachine: (addons-214441)     </serial>
	I0929 11:30:27.814707  595895 main.go:141] libmachine: (addons-214441)     <console type='pty'>
	I0929 11:30:27.814717  595895 main.go:141] libmachine: (addons-214441)       <target type='serial' port='0'/>
	I0929 11:30:27.814725  595895 main.go:141] libmachine: (addons-214441)     </console>
	I0929 11:30:27.814732  595895 main.go:141] libmachine: (addons-214441)     <rng model='virtio'>
	I0929 11:30:27.814741  595895 main.go:141] libmachine: (addons-214441)       <backend model='random'>/dev/random</backend>
	I0929 11:30:27.814750  595895 main.go:141] libmachine: (addons-214441)     </rng>
	I0929 11:30:27.814759  595895 main.go:141] libmachine: (addons-214441)   </devices>
	I0929 11:30:27.814768  595895 main.go:141] libmachine: (addons-214441) </domain>
	I0929 11:30:27.814781  595895 main.go:141] libmachine: (addons-214441) 
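The XML above is the minimal domain definition the kvm2 driver hands to libvirt; libvirt then fills in the UUID, controllers and PCI addresses, as the fuller dump further down shows. As a rough sketch of the define-and-start step (assuming the libvirt Go bindings at libvirt.org/go/libvirt; this is illustrative, not the driver's actual code):

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

// defineAndStart persistently defines a domain from generated XML and boots
// it, roughly what happens between the XML above and the
// "domain is now running" line below.
func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create() // equivalent to `virsh start addons-214441`
}

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: defineandstart <domain.xml>")
	}
	xml, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	if err := defineAndStart(string(xml)); err != nil {
		log.Fatal(err)
	}
}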
	I0929 11:30:27.822484  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:b8:70:d1 in network default
	I0929 11:30:27.823310  595895 main.go:141] libmachine: (addons-214441) starting domain...
	I0929 11:30:27.823336  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:27.823353  595895 main.go:141] libmachine: (addons-214441) ensuring networks are active...
	I0929 11:30:27.824165  595895 main.go:141] libmachine: (addons-214441) Ensuring network default is active
	I0929 11:30:27.824600  595895 main.go:141] libmachine: (addons-214441) Ensuring network mk-addons-214441 is active
	I0929 11:30:27.825327  595895 main.go:141] libmachine: (addons-214441) getting domain XML...
	I0929 11:30:27.826485  595895 main.go:141] libmachine: (addons-214441) DBG | starting domain XML:
	I0929 11:30:27.826497  595895 main.go:141] libmachine: (addons-214441) DBG | <domain type='kvm'>
	I0929 11:30:27.826534  595895 main.go:141] libmachine: (addons-214441) DBG |   <name>addons-214441</name>
	I0929 11:30:27.826556  595895 main.go:141] libmachine: (addons-214441) DBG |   <uuid>44179717-3988-47cd-b8d8-61dffe58e059</uuid>
	I0929 11:30:27.826564  595895 main.go:141] libmachine: (addons-214441) DBG |   <memory unit='KiB'>4194304</memory>
	I0929 11:30:27.826573  595895 main.go:141] libmachine: (addons-214441) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I0929 11:30:27.826583  595895 main.go:141] libmachine: (addons-214441) DBG |   <vcpu placement='static'>2</vcpu>
	I0929 11:30:27.826594  595895 main.go:141] libmachine: (addons-214441) DBG |   <os>
	I0929 11:30:27.826603  595895 main.go:141] libmachine: (addons-214441) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0929 11:30:27.826611  595895 main.go:141] libmachine: (addons-214441) DBG |     <boot dev='cdrom'/>
	I0929 11:30:27.826619  595895 main.go:141] libmachine: (addons-214441) DBG |     <boot dev='hd'/>
	I0929 11:30:27.826627  595895 main.go:141] libmachine: (addons-214441) DBG |     <bootmenu enable='no'/>
	I0929 11:30:27.826636  595895 main.go:141] libmachine: (addons-214441) DBG |   </os>
	I0929 11:30:27.826643  595895 main.go:141] libmachine: (addons-214441) DBG |   <features>
	I0929 11:30:27.826652  595895 main.go:141] libmachine: (addons-214441) DBG |     <acpi/>
	I0929 11:30:27.826658  595895 main.go:141] libmachine: (addons-214441) DBG |     <apic/>
	I0929 11:30:27.826666  595895 main.go:141] libmachine: (addons-214441) DBG |     <pae/>
	I0929 11:30:27.826670  595895 main.go:141] libmachine: (addons-214441) DBG |   </features>
	I0929 11:30:27.826676  595895 main.go:141] libmachine: (addons-214441) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0929 11:30:27.826680  595895 main.go:141] libmachine: (addons-214441) DBG |   <clock offset='utc'/>
	I0929 11:30:27.826712  595895 main.go:141] libmachine: (addons-214441) DBG |   <on_poweroff>destroy</on_poweroff>
	I0929 11:30:27.826730  595895 main.go:141] libmachine: (addons-214441) DBG |   <on_reboot>restart</on_reboot>
	I0929 11:30:27.826740  595895 main.go:141] libmachine: (addons-214441) DBG |   <on_crash>destroy</on_crash>
	I0929 11:30:27.826748  595895 main.go:141] libmachine: (addons-214441) DBG |   <devices>
	I0929 11:30:27.826760  595895 main.go:141] libmachine: (addons-214441) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0929 11:30:27.826771  595895 main.go:141] libmachine: (addons-214441) DBG |     <disk type='file' device='cdrom'>
	I0929 11:30:27.826782  595895 main.go:141] libmachine: (addons-214441) DBG |       <driver name='qemu' type='raw'/>
	I0929 11:30:27.826804  595895 main.go:141] libmachine: (addons-214441) DBG |       <source file='/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/boot2docker.iso'/>
	I0929 11:30:27.826817  595895 main.go:141] libmachine: (addons-214441) DBG |       <target dev='hdc' bus='scsi'/>
	I0929 11:30:27.826828  595895 main.go:141] libmachine: (addons-214441) DBG |       <readonly/>
	I0929 11:30:27.826842  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0929 11:30:27.826853  595895 main.go:141] libmachine: (addons-214441) DBG |     </disk>
	I0929 11:30:27.826863  595895 main.go:141] libmachine: (addons-214441) DBG |     <disk type='file' device='disk'>
	I0929 11:30:27.826884  595895 main.go:141] libmachine: (addons-214441) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0929 11:30:27.826906  595895 main.go:141] libmachine: (addons-214441) DBG |       <source file='/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/addons-214441.rawdisk'/>
	I0929 11:30:27.826922  595895 main.go:141] libmachine: (addons-214441) DBG |       <target dev='hda' bus='virtio'/>
	I0929 11:30:27.826937  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0929 11:30:27.826947  595895 main.go:141] libmachine: (addons-214441) DBG |     </disk>
	I0929 11:30:27.826959  595895 main.go:141] libmachine: (addons-214441) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0929 11:30:27.826972  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0929 11:30:27.826984  595895 main.go:141] libmachine: (addons-214441) DBG |     </controller>
	I0929 11:30:27.827000  595895 main.go:141] libmachine: (addons-214441) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0929 11:30:27.827014  595895 main.go:141] libmachine: (addons-214441) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0929 11:30:27.827028  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0929 11:30:27.827039  595895 main.go:141] libmachine: (addons-214441) DBG |     </controller>
	I0929 11:30:27.827046  595895 main.go:141] libmachine: (addons-214441) DBG |     <interface type='network'>
	I0929 11:30:27.827053  595895 main.go:141] libmachine: (addons-214441) DBG |       <mac address='52:54:00:98:9c:d8'/>
	I0929 11:30:27.827060  595895 main.go:141] libmachine: (addons-214441) DBG |       <source network='mk-addons-214441'/>
	I0929 11:30:27.827087  595895 main.go:141] libmachine: (addons-214441) DBG |       <model type='virtio'/>
	I0929 11:30:27.827120  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0929 11:30:27.827133  595895 main.go:141] libmachine: (addons-214441) DBG |     </interface>
	I0929 11:30:27.827141  595895 main.go:141] libmachine: (addons-214441) DBG |     <interface type='network'>
	I0929 11:30:27.827146  595895 main.go:141] libmachine: (addons-214441) DBG |       <mac address='52:54:00:b8:70:d1'/>
	I0929 11:30:27.827154  595895 main.go:141] libmachine: (addons-214441) DBG |       <source network='default'/>
	I0929 11:30:27.827172  595895 main.go:141] libmachine: (addons-214441) DBG |       <model type='virtio'/>
	I0929 11:30:27.827197  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0929 11:30:27.827208  595895 main.go:141] libmachine: (addons-214441) DBG |     </interface>
	I0929 11:30:27.827218  595895 main.go:141] libmachine: (addons-214441) DBG |     <serial type='pty'>
	I0929 11:30:27.827232  595895 main.go:141] libmachine: (addons-214441) DBG |       <target type='isa-serial' port='0'>
	I0929 11:30:27.827252  595895 main.go:141] libmachine: (addons-214441) DBG |         <model name='isa-serial'/>
	I0929 11:30:27.827267  595895 main.go:141] libmachine: (addons-214441) DBG |       </target>
	I0929 11:30:27.827295  595895 main.go:141] libmachine: (addons-214441) DBG |     </serial>
	I0929 11:30:27.827306  595895 main.go:141] libmachine: (addons-214441) DBG |     <console type='pty'>
	I0929 11:30:27.827316  595895 main.go:141] libmachine: (addons-214441) DBG |       <target type='serial' port='0'/>
	I0929 11:30:27.827327  595895 main.go:141] libmachine: (addons-214441) DBG |     </console>
	I0929 11:30:27.827337  595895 main.go:141] libmachine: (addons-214441) DBG |     <input type='mouse' bus='ps2'/>
	I0929 11:30:27.827353  595895 main.go:141] libmachine: (addons-214441) DBG |     <input type='keyboard' bus='ps2'/>
	I0929 11:30:27.827365  595895 main.go:141] libmachine: (addons-214441) DBG |     <audio id='1' type='none'/>
	I0929 11:30:27.827381  595895 main.go:141] libmachine: (addons-214441) DBG |     <memballoon model='virtio'>
	I0929 11:30:27.827396  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0929 11:30:27.827407  595895 main.go:141] libmachine: (addons-214441) DBG |     </memballoon>
	I0929 11:30:27.827416  595895 main.go:141] libmachine: (addons-214441) DBG |     <rng model='virtio'>
	I0929 11:30:27.827462  595895 main.go:141] libmachine: (addons-214441) DBG |       <backend model='random'>/dev/random</backend>
	I0929 11:30:27.827477  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0929 11:30:27.827484  595895 main.go:141] libmachine: (addons-214441) DBG |     </rng>
	I0929 11:30:27.827492  595895 main.go:141] libmachine: (addons-214441) DBG |   </devices>
	I0929 11:30:27.827507  595895 main.go:141] libmachine: (addons-214441) DBG | </domain>
	I0929 11:30:27.827523  595895 main.go:141] libmachine: (addons-214441) DBG | 
	I0929 11:30:29.153785  595895 main.go:141] libmachine: (addons-214441) waiting for domain to start...
	I0929 11:30:29.155338  595895 main.go:141] libmachine: (addons-214441) domain is now running
	I0929 11:30:29.155366  595895 main.go:141] libmachine: (addons-214441) waiting for IP...
	I0929 11:30:29.156233  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:29.156741  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:29.156768  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:29.157097  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:29.157173  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:29.157084  595923 retry.go:31] will retry after 193.130078ms: waiting for domain to come up
	I0929 11:30:29.351641  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:29.352088  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:29.352131  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:29.352401  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:29.352453  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:29.352389  595923 retry.go:31] will retry after 298.936458ms: waiting for domain to come up
	I0929 11:30:29.653209  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:29.653776  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:29.653812  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:29.654092  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:29.654145  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:29.654057  595923 retry.go:31] will retry after 319.170448ms: waiting for domain to come up
	I0929 11:30:29.974953  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:29.975656  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:29.975697  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:29.976026  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:29.976053  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:29.976008  595923 retry.go:31] will retry after 599.248845ms: waiting for domain to come up
	I0929 11:30:30.576933  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:30.577607  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:30.577638  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:30.577976  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:30.578001  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:30.577944  595923 retry.go:31] will retry after 506.439756ms: waiting for domain to come up
	I0929 11:30:31.085911  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:31.086486  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:31.086516  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:31.086838  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:31.086901  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:31.086827  595923 retry.go:31] will retry after 714.950089ms: waiting for domain to come up
	I0929 11:30:31.803913  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:31.804432  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:31.804465  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:31.804799  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:31.804835  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:31.804762  595923 retry.go:31] will retry after 948.596157ms: waiting for domain to come up
	I0929 11:30:32.755226  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:32.755814  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:32.755837  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:32.756159  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:32.756191  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:32.756135  595923 retry.go:31] will retry after 1.377051804s: waiting for domain to come up
	I0929 11:30:34.136012  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:34.136582  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:34.136605  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:34.136880  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:34.136912  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:34.136849  595923 retry.go:31] will retry after 1.34696154s: waiting for domain to come up
	I0929 11:30:35.485739  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:35.486269  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:35.486292  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:35.486548  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:35.486587  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:35.486521  595923 retry.go:31] will retry after 1.574508192s: waiting for domain to come up
	I0929 11:30:37.063528  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:37.064142  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:37.064170  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:37.064559  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:37.064594  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:37.064489  595923 retry.go:31] will retry after 2.067291223s: waiting for domain to come up
	I0929 11:30:39.135405  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:39.135998  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:39.136030  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:39.136354  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:39.136412  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:39.136338  595923 retry.go:31] will retry after 3.104602856s: waiting for domain to come up
	I0929 11:30:42.242410  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:42.242939  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:42.242965  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:42.243288  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:42.243344  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:42.243280  595923 retry.go:31] will retry after 4.150705767s: waiting for domain to come up
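The loop above polls the libvirt DHCP leases (falling back to ARP) with a growing backoff until the guest reports an address, which it does just below. A minimal stand-alone sketch of that polling pattern, using only the standard library (the real helper is minikube's retry.go plus its lease lookup, neither shown here):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookup with a growing backoff until it yields an address
// or the deadline passes. lookup stands in for the lease/ARP query in the log.
func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookup(); ok {
			return ip, nil
		}
		time.Sleep(backoff)
		if backoff < 5*time.Second {
			backoff = backoff * 3 / 2 // grow roughly like the retry intervals above
		}
	}
	return "", errors.New("timed out waiting for domain IP")
}

func main() {
	start := time.Now()
	ip, err := waitForIP(func() (string, bool) {
		// pretend the lease shows up after ~3 seconds
		return "192.168.39.76", time.Since(start) > 3*time.Second
	}, 30*time.Second)
	fmt.Println(ip, err)
}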
	I0929 11:30:46.398779  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.399347  595895 main.go:141] libmachine: (addons-214441) found domain IP: 192.168.39.76
	I0929 11:30:46.399374  595895 main.go:141] libmachine: (addons-214441) reserving static IP address...
	I0929 11:30:46.399388  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has current primary IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.399901  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find host DHCP lease matching {name: "addons-214441", mac: "52:54:00:98:9c:d8", ip: "192.168.39.76"} in network mk-addons-214441
	I0929 11:30:46.587177  595895 main.go:141] libmachine: (addons-214441) DBG | Getting to WaitForSSH function...
	I0929 11:30:46.587215  595895 main.go:141] libmachine: (addons-214441) reserved static IP address 192.168.39.76 for domain addons-214441
	I0929 11:30:46.587228  595895 main.go:141] libmachine: (addons-214441) waiting for SSH...
	I0929 11:30:46.590179  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.590588  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:minikube Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:46.590626  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.590750  595895 main.go:141] libmachine: (addons-214441) DBG | Using SSH client type: external
	I0929 11:30:46.590791  595895 main.go:141] libmachine: (addons-214441) DBG | Using SSH private key: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa (-rw-------)
	I0929 11:30:46.590840  595895 main.go:141] libmachine: (addons-214441) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0929 11:30:46.590868  595895 main.go:141] libmachine: (addons-214441) DBG | About to run SSH command:
	I0929 11:30:46.590883  595895 main.go:141] libmachine: (addons-214441) DBG | exit 0
	I0929 11:30:46.729877  595895 main.go:141] libmachine: (addons-214441) DBG | SSH cmd err, output: <nil>: 
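WaitForSSH above shells out to the system ssh binary with the options printed in the log (no host-key checking, identity file only, password auth disabled) and treats a clean `exit 0` as "the guest is reachable". A small sketch of that probe with os/exec; the option set mirrors the log, while the function name and key path are placeholders:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReachable runs `exit 0` on the guest through the external ssh client,
// mirroring the option set shown in the log above.
func sshReachable(ip, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	for !sshReachable("192.168.39.76", "/path/to/id_rsa") {
		time.Sleep(time.Second) // keep probing until sshd answers
	}
	fmt.Println("SSH is up")
}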
	I0929 11:30:46.730171  595895 main.go:141] libmachine: (addons-214441) domain creation complete
	I0929 11:30:46.730534  595895 main.go:141] libmachine: (addons-214441) Calling .GetConfigRaw
	I0929 11:30:46.731196  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:46.731410  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:46.731600  595895 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0929 11:30:46.731623  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:30:46.732882  595895 main.go:141] libmachine: Detecting operating system of created instance...
	I0929 11:30:46.732897  595895 main.go:141] libmachine: Waiting for SSH to be available...
	I0929 11:30:46.732902  595895 main.go:141] libmachine: Getting to WaitForSSH function...
	I0929 11:30:46.732908  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:46.735685  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.736210  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:46.736238  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.736397  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:46.736652  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.736854  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.736998  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:46.737156  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:46.737392  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:46.737403  595895 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0929 11:30:46.844278  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: 
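Once the external probe succeeds, the provisioner switches to its "native" SSH client, an in-process Go client, for the remaining commands (`exit 0`, `cat /etc/os-release`, the hostname setup, and so on). A sketch of that kind of native exec, assuming golang.org/x/crypto/ssh; the helper name and key path are illustrative, not minikube's own wiring:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runNative opens an in-process SSH session and runs a single command,
// returning its combined output.
func runNative(ip, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // same spirit as StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", ip+":22", cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runNative("192.168.39.76", "docker", "/path/to/id_rsa", "cat /etc/os-release")
	fmt.Println(out, err)
}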
	I0929 11:30:46.844312  595895 main.go:141] libmachine: Detecting the provisioner...
	I0929 11:30:46.844324  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:46.848224  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.849228  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:46.849264  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.849457  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:46.849706  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.849884  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.850038  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:46.850227  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:46.850481  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:46.850494  595895 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0929 11:30:46.959386  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0929 11:30:46.959537  595895 main.go:141] libmachine: found compatible host: buildroot
	I0929 11:30:46.959560  595895 main.go:141] libmachine: Provisioning with buildroot...
	I0929 11:30:46.959572  595895 main.go:141] libmachine: (addons-214441) Calling .GetMachineName
	I0929 11:30:46.959897  595895 buildroot.go:166] provisioning hostname "addons-214441"
	I0929 11:30:46.959920  595895 main.go:141] libmachine: (addons-214441) Calling .GetMachineName
	I0929 11:30:46.960158  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:46.963429  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.963851  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:46.963892  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.964187  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:46.964389  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.964590  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.964750  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:46.964942  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:46.965188  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:46.965202  595895 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-214441 && echo "addons-214441" | sudo tee /etc/hostname
	I0929 11:30:47.092132  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-214441
	
	I0929 11:30:47.092159  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.095605  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.096136  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.096169  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.096340  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.096555  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.096747  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.096902  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.097123  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:47.097351  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:47.097369  595895 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-214441' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-214441/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-214441' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 11:30:47.216048  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 11:30:47.216081  595895 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21654-591397/.minikube CaCertPath:/home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21654-591397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21654-591397/.minikube}
	I0929 11:30:47.216160  595895 buildroot.go:174] setting up certificates
	I0929 11:30:47.216176  595895 provision.go:84] configureAuth start
	I0929 11:30:47.216187  595895 main.go:141] libmachine: (addons-214441) Calling .GetMachineName
	I0929 11:30:47.216551  595895 main.go:141] libmachine: (addons-214441) Calling .GetIP
	I0929 11:30:47.219822  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.220206  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.220241  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.220424  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.222925  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.223320  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.223351  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.223603  595895 provision.go:143] copyHostCerts
	I0929 11:30:47.223674  595895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21654-591397/.minikube/cert.pem (1123 bytes)
	I0929 11:30:47.223815  595895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21654-591397/.minikube/key.pem (1675 bytes)
	I0929 11:30:47.223908  595895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21654-591397/.minikube/ca.pem (1082 bytes)
	I0929 11:30:47.223987  595895 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca-key.pem org=jenkins.addons-214441 san=[127.0.0.1 192.168.39.76 addons-214441 localhost minikube]
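The server certificate above is generated locally and signed with the profile's CA, carrying the SAN list from the log (127.0.0.1, 192.168.39.76, addons-214441, localhost, minikube). A compressed sketch of that step with crypto/x509; the file names, key size, validity period and PKCS#1 key format are assumptions, not what minikube necessarily uses:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Load the CA pair (ca.pem / ca-key.pem from the certs dir in the log).
	caPEM, err := os.ReadFile("ca.pem")
	check(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	check(err)
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
	check(err)

	// Fresh key pair for the server certificate.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-214441"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // validity period is a guess
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs listed in the provisioning log line above.
		DNSNames:    []string{"addons-214441", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.76")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}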
	I0929 11:30:47.541100  595895 provision.go:177] copyRemoteCerts
	I0929 11:30:47.541199  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 11:30:47.541238  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.544486  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.544940  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.545024  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.545286  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.545574  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.545766  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.545940  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:30:47.632441  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 11:30:47.665928  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 11:30:47.699464  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 11:30:47.731874  595895 provision.go:87] duration metric: took 515.680125ms to configureAuth
	I0929 11:30:47.731904  595895 buildroot.go:189] setting minikube options for container-runtime
	I0929 11:30:47.732120  595895 config.go:182] Loaded profile config "addons-214441": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:30:47.732187  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:47.732484  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.735606  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.736098  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.736147  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.736408  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.736676  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.736876  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.737026  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.737286  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:47.737503  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:47.737522  595895 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 11:30:47.845243  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0929 11:30:47.845278  595895 buildroot.go:70] root file system type: tmpfs
	I0929 11:30:47.845464  595895 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 11:30:47.845493  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.848685  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.849080  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.849125  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.849333  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.849561  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.849749  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.849921  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.850156  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:47.850438  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:47.850513  595895 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 11:30:47.980841  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 11:30:47.980885  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.984021  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.984467  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.984505  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.984746  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.984964  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.985145  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.985345  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.985533  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:47.985753  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:47.985769  595895 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 11:30:48.944806  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
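The one-liner above is the "only touch the unit if it actually changed" idiom: diff the freshly rendered unit against the installed one, and only on a difference move it into place and daemon-reload / enable / restart docker (here the diff fails because no unit exists yet, so the install runs unconditionally). The same idea expressed in Go, shelling out to systemctl; purely illustrative:

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

// installIfChanged writes newUnit to path and restarts the service only when
// the rendered content differs from what is already installed.
func installIfChanged(path string, newUnit []byte, service string) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, newUnit) {
		return nil // nothing changed, leave the running service alone
	}
	if err := os.WriteFile(path, newUnit, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", service},
		{"restart", service},
	} {
		cmd := exec.Command("systemctl", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // truncated for the sketch
	if err := installIfChanged("/lib/systemd/system/docker.service", unit, "docker"); err != nil {
		log.Fatal(err)
	}
}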
	
	I0929 11:30:48.944837  595895 main.go:141] libmachine: Checking connection to Docker...
	I0929 11:30:48.944847  595895 main.go:141] libmachine: (addons-214441) Calling .GetURL
	I0929 11:30:48.946423  595895 main.go:141] libmachine: (addons-214441) DBG | using libvirt version 8000000
	I0929 11:30:48.949334  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:48.949705  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:48.949727  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:48.949905  595895 main.go:141] libmachine: Docker is up and running!
	I0929 11:30:48.949918  595895 main.go:141] libmachine: Reticulating splines...
	I0929 11:30:48.949926  595895 client.go:171] duration metric: took 22.382562322s to LocalClient.Create
	I0929 11:30:48.949961  595895 start.go:167] duration metric: took 22.382646372s to libmachine.API.Create "addons-214441"
	I0929 11:30:48.949977  595895 start.go:293] postStartSetup for "addons-214441" (driver="kvm2")
	I0929 11:30:48.949995  595895 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 11:30:48.950016  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:48.950285  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 11:30:48.950309  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:48.952588  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:48.952941  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:48.952973  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:48.953140  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:48.953358  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:48.953522  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:48.953678  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:30:49.038834  595895 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 11:30:49.044530  595895 info.go:137] Remote host: Buildroot 2025.02
	I0929 11:30:49.044562  595895 filesync.go:126] Scanning /home/jenkins/minikube-integration/21654-591397/.minikube/addons for local assets ...
	I0929 11:30:49.044653  595895 filesync.go:126] Scanning /home/jenkins/minikube-integration/21654-591397/.minikube/files for local assets ...
	I0929 11:30:49.044700  595895 start.go:296] duration metric: took 94.715435ms for postStartSetup
	I0929 11:30:49.044748  595895 main.go:141] libmachine: (addons-214441) Calling .GetConfigRaw
	I0929 11:30:49.045427  595895 main.go:141] libmachine: (addons-214441) Calling .GetIP
	I0929 11:30:49.048440  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.048801  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.048825  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.049194  595895 profile.go:143] Saving config to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/config.json ...
	I0929 11:30:49.049405  595895 start.go:128] duration metric: took 22.499712752s to createHost
	I0929 11:30:49.049432  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:49.052122  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.052625  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.052654  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.052915  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:49.053180  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:49.053373  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:49.053538  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:49.053724  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:49.053929  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:49.053940  595895 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0929 11:30:49.163416  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759145449.126116077
	
	I0929 11:30:49.163441  595895 fix.go:216] guest clock: 1759145449.126116077
	I0929 11:30:49.163449  595895 fix.go:229] Guest: 2025-09-29 11:30:49.126116077 +0000 UTC Remote: 2025-09-29 11:30:49.049418276 +0000 UTC m=+22.624163516 (delta=76.697801ms)
	I0929 11:30:49.163493  595895 fix.go:200] guest clock delta is within tolerance: 76.697801ms
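The clock check runs `date +%s.%N` in the guest, parses the result, and compares it with the host clock captured at the same moment; here the guest is roughly 76.7 ms ahead, which is accepted. A small sketch of that comparison, reusing the two timestamps from the log (the tolerance value is an assumption; this excerpt does not show the real threshold):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// parseGuestClock turns `date +%s.%N` output into a time.Time.
func parseGuestClock(stamp string) (time.Time, error) {
	secs, err := strconv.ParseFloat(stamp, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(0, int64(secs*float64(time.Second))), nil
}

func main() {
	guest, err := parseGuestClock("1759145449.126116077") // guest output from the log
	if err != nil {
		panic(err)
	}
	host := time.Date(2025, time.September, 29, 11, 30, 49, 49418276, time.UTC) // host time from the log
	delta := guest.Sub(host)

	const assumedTolerance = 1 * time.Second // assumption, not minikube's actual value
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n",
		delta, math.Abs(delta.Seconds()) < assumedTolerance.Seconds())
}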
	I0929 11:30:49.163499  595895 start.go:83] releasing machines lock for "addons-214441", held for 22.613874794s
	I0929 11:30:49.163528  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:49.163838  595895 main.go:141] libmachine: (addons-214441) Calling .GetIP
	I0929 11:30:49.166822  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.167209  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.167249  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.167420  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:49.168022  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:49.168252  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:49.168368  595895 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 11:30:49.168430  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:49.168489  595895 ssh_runner.go:195] Run: cat /version.json
	I0929 11:30:49.168513  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:49.172018  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.172253  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.172513  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.172540  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.172628  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.172666  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.172701  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:49.172958  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:49.173000  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:49.173136  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:49.173213  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:49.173301  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:49.173395  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:30:49.173457  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:30:49.251709  595895 ssh_runner.go:195] Run: systemctl --version
	I0929 11:30:49.275600  595895 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0929 11:30:49.282636  595895 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0929 11:30:49.282710  595895 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 11:30:49.304880  595895 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0929 11:30:49.304913  595895 start.go:495] detecting cgroup driver to use...
	I0929 11:30:49.305043  595895 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 11:30:49.330757  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 11:30:49.345061  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 11:30:49.359226  595895 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0929 11:30:49.359329  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0929 11:30:49.373874  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 11:30:49.388075  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 11:30:49.401811  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 11:30:49.415626  595895 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 11:30:49.431189  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 11:30:49.445445  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 11:30:49.459477  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
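The sed one-liners above converge /etc/containerd/config.toml on a cgroupfs-driven, CRI-ready configuration. A rough sketch of the settings they aim to leave behind, plus a quick way to confirm them, follows; this is illustrative only, and the section paths are assumed from containerd's usual config layout rather than reproduced from the log:
  # Sketch of the relevant config.toml values after the edits (not the full file):
  #   [plugins."io.containerd.grpc.v1.cri"]
  #     sandbox_image = "registry.k8s.io/pause:3.10.1"
  #     enable_unprivileged_ports = true
  #     [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  #       SystemdCgroup = false
  # Confirm after the edits:
  grep -nE 'sandbox_image|SystemdCgroup|enable_unprivileged_ports' /etc/containerd/config.toml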
	I0929 11:30:49.473176  595895 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 11:30:49.485689  595895 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0929 11:30:49.485783  595895 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0929 11:30:49.499975  595895 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
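The sysctl probe above fails only because the br_netfilter module is not loaded yet, so minikube falls back to modprobe and then enables IPv4 forwarding. The same prerequisites can be checked by hand (illustrative):
  sudo modprobe br_netfilter                 # load the bridge netfilter module
  sysctl net.bridge.bridge-nf-call-iptables  # resolves once the module is loaded (typically 1)
  cat /proc/sys/net/ipv4/ip_forward          # 1 means IPv4 forwarding is enabled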
	I0929 11:30:49.513013  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:49.660311  595895 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 11:30:49.703655  595895 start.go:495] detecting cgroup driver to use...
	I0929 11:30:49.703755  595895 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 11:30:49.722813  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 11:30:49.750032  595895 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 11:30:49.777529  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 11:30:49.795732  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 11:30:49.813375  595895 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0929 11:30:49.851205  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 11:30:49.869489  595895 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 11:30:49.896122  595895 ssh_runner.go:195] Run: which cri-dockerd
	I0929 11:30:49.900877  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 11:30:49.914013  595895 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 11:30:49.937663  595895 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 11:30:50.087078  595895 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 11:30:50.258242  595895 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0929 11:30:50.258407  595895 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
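The 130-byte /etc/docker/daemon.json written here is what switches Docker to the cgroupfs driver. Its exact contents are not captured in the log; a hypothetical file with the same effect, written with the log's own printf|tee pattern, would look roughly like this:
  # Hypothetical daemon.json (assumption: only the cgroup driver needs to change)
  printf '%s\n' '{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }' | sudo tee /etc/docker/daemon.json
  sudo systemctl restart docker
  docker info --format '{{.CgroupDriver}}'   # should print: cgroupfs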
	I0929 11:30:50.281600  595895 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 11:30:50.297843  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:50.442188  595895 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 11:30:51.468324  595895 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.026092315s)
	I0929 11:30:51.468405  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 11:30:51.485284  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 11:30:51.502338  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 11:30:51.520247  595895 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 11:30:51.674618  595895 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 11:30:51.823542  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:51.969743  595895 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 11:30:52.010885  595895 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 11:30:52.027992  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:52.187556  595895 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 11:30:52.300820  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 11:30:52.324658  595895 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 11:30:52.324786  595895 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 11:30:52.331994  595895 start.go:563] Will wait 60s for crictl version
	I0929 11:30:52.332070  595895 ssh_runner.go:195] Run: which crictl
	I0929 11:30:52.336923  595895 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 11:30:52.378177  595895 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
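The version probe above goes through crictl against cri-dockerd. The same check can be run manually against the socket minikube just configured (illustrative):
  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info | head -n 20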
	I0929 11:30:52.378280  595895 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 11:30:52.410851  595895 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 11:30:52.543475  595895 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 11:30:52.543553  595895 main.go:141] libmachine: (addons-214441) Calling .GetIP
	I0929 11:30:52.546859  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:52.547288  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:52.547313  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:52.547612  595895 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0929 11:30:52.553031  595895 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 11:30:52.570843  595895 kubeadm.go:875] updating cluster {Name:addons-214441 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 11:30:52.570982  595895 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 11:30:52.571045  595895 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 11:30:52.589813  595895 docker.go:691] Got preloaded images: 
	I0929 11:30:52.589850  595895 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.0 wasn't preloaded
	I0929 11:30:52.589920  595895 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0929 11:30:52.603859  595895 ssh_runner.go:195] Run: which lz4
	I0929 11:30:52.608929  595895 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0929 11:30:52.614449  595895 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0929 11:30:52.614480  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (353447550 bytes)
	I0929 11:30:54.030641  595895 docker.go:655] duration metric: took 1.421784291s to copy over tarball
	I0929 11:30:54.030729  595895 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0929 11:30:55.448691  595895 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.417923545s)
	I0929 11:30:55.448737  595895 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0929 11:30:55.496341  595895 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0929 11:30:55.514175  595895 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2632 bytes)
	I0929 11:30:55.539628  595895 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 11:30:55.556201  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:55.705196  595895 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 11:30:57.773379  595895 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.068131004s)
	I0929 11:30:57.773509  595895 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 11:30:57.795878  595895 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 11:30:57.795910  595895 cache_images.go:85] Images are preloaded, skipping loading
	I0929 11:30:57.795931  595895 kubeadm.go:926] updating node { 192.168.39.76 8443 v1.34.0 docker true true} ...
	I0929 11:30:57.796049  595895 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-214441 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
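The unit fragment above is rendered into a systemd drop-in rather than replacing the stock kubelet.service; a few lines below, the log copies it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. The effective merged unit can be inspected with (illustrative):
  systemctl cat kubelet                # stock unit plus the 10-kubeadm.conf drop-in
  systemctl show kubelet -p ExecStart  # the final ExecStart after the override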
	I0929 11:30:57.796127  595895 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 11:30:57.852690  595895 cni.go:84] Creating CNI manager for ""
	I0929 11:30:57.852756  595895 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 11:30:57.852774  595895 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 11:30:57.852803  595895 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.76 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-214441 NodeName:addons-214441 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.76"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.76 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 11:30:57.852981  595895 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.76
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-214441"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.76"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.76"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
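The kubeadm config above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. Outside the test, a config of this shape can be sanity-checked against upstream defaults; newer kubeadm releases also include a validate subcommand (illustrative; binary and config paths taken from the log):
  sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
  /var/lib/minikube/binaries/v1.34.0/kubeadm config print init-defaults   # upstream defaults for comparison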
	
	I0929 11:30:57.853053  595895 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 11:30:57.866164  595895 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 11:30:57.866236  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 11:30:57.879054  595895 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0929 11:30:57.901136  595895 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 11:30:57.922808  595895 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I0929 11:30:57.944391  595895 ssh_runner.go:195] Run: grep 192.168.39.76	control-plane.minikube.internal$ /etc/hosts
	I0929 11:30:57.949077  595895 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.76	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 11:30:57.965713  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:58.115608  595895 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:30:58.151915  595895 certs.go:68] Setting up /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441 for IP: 192.168.39.76
	I0929 11:30:58.151940  595895 certs.go:194] generating shared ca certs ...
	I0929 11:30:58.151960  595895 certs.go:226] acquiring lock for ca certs: {Name:mk707c73ecd79d5343eca8617a792346e0c7ccb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.152119  595895 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21654-591397/.minikube/ca.key
	I0929 11:30:58.470474  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/ca.crt ...
	I0929 11:30:58.470507  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/ca.crt: {Name:mk182656d7edea57f023d2e0db199cb4225a8b4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.470704  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/ca.key ...
	I0929 11:30:58.470715  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/ca.key: {Name:mkd9949b3876b9f68542fba6d581787f4502134f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.470791  595895 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.key
	I0929 11:30:58.721631  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.crt ...
	I0929 11:30:58.721664  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.crt: {Name:mk28d9b982dd4335b19ce60c764e1cd1a4d53764 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.721838  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.key ...
	I0929 11:30:58.721850  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.key: {Name:mk92f9d60795b7f581dcb4003e857f2fb68fb997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.721920  595895 certs.go:256] generating profile certs ...
	I0929 11:30:58.721989  595895 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.key
	I0929 11:30:58.722004  595895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt with IP's: []
	I0929 11:30:59.043304  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt ...
	I0929 11:30:59.043336  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: {Name:mkd724da95490eed1b0581ef6c65a2b1785468b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.043499  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.key ...
	I0929 11:30:59.043510  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.key: {Name:mkba543125a928af6b44a2eb304c49514c816581 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.043578  595895 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key.adbef5ab
	I0929 11:30:59.043598  595895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt.adbef5ab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.76]
	I0929 11:30:59.456164  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt.adbef5ab ...
	I0929 11:30:59.456200  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt.adbef5ab: {Name:mk5a23687be38fbd7ef5257880d1d7f5b199f933 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.456424  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key.adbef5ab ...
	I0929 11:30:59.456443  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key.adbef5ab: {Name:mke7b9b847497d2728644e9b30a8393a50e57e5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.456526  595895 certs.go:381] copying /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt.adbef5ab -> /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt
	I0929 11:30:59.456638  595895 certs.go:385] copying /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key.adbef5ab -> /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key
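The apiserver certificate generated above is signed for the service VIP, loopback, and the node IP. A quick way to confirm the SANs on the written cert (illustrative):
  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt \
    | grep -A1 'Subject Alternative Name'
  # expected to list 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.76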
	I0929 11:30:59.456705  595895 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.key
	I0929 11:30:59.456726  595895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.crt with IP's: []
	I0929 11:30:59.785388  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.crt ...
	I0929 11:30:59.785424  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.crt: {Name:mkb2afc6ab3119c9842fe1ce2f48d7c6196dbfb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.785611  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.key ...
	I0929 11:30:59.785642  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.key: {Name:mk6b37b3ae22881d553c47031d96c6f22bdfded2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.785833  595895 certs.go:484] found cert: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 11:30:59.785879  595895 certs.go:484] found cert: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem (1082 bytes)
	I0929 11:30:59.785905  595895 certs.go:484] found cert: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/cert.pem (1123 bytes)
	I0929 11:30:59.785932  595895 certs.go:484] found cert: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/key.pem (1675 bytes)
	I0929 11:30:59.786662  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 11:30:59.821270  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 11:30:59.853588  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 11:30:59.885559  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 11:30:59.916538  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 11:30:59.948991  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 11:30:59.981478  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 11:31:00.014753  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0929 11:31:00.046891  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 11:31:00.079370  595895 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 11:31:00.101600  595895 ssh_runner.go:195] Run: openssl version
	I0929 11:31:00.108829  595895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 11:31:00.123448  595895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:31:00.129416  595895 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 11:30 /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:31:00.129502  595895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:31:00.137583  595895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
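The b5213941.0 symlink name follows OpenSSL's subject-hash convention for trust-store lookups: the hash printed by the command above determines the link name. Reproduced by hand (illustrative):
  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
  ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem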
	I0929 11:31:00.152396  595895 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 11:31:00.157895  595895 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 11:31:00.157960  595895 kubeadm.go:392] StartCluster: {Name:addons-214441 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:31:00.158083  595895 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 11:31:00.176917  595895 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 11:31:00.190119  595895 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 11:31:00.203558  595895 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 11:31:00.216736  595895 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 11:31:00.216758  595895 kubeadm.go:157] found existing configuration files:
	
	I0929 11:31:00.216805  595895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 11:31:00.229008  595895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 11:31:00.229138  595895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 11:31:00.242441  595895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 11:31:00.254460  595895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 11:31:00.254523  595895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 11:31:00.268124  595895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 11:31:00.284523  595895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 11:31:00.284596  595895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 11:31:00.297510  595895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 11:31:00.311858  595895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 11:31:00.311927  595895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 11:31:00.329319  595895 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0929 11:31:00.392668  595895 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 11:31:00.392776  595895 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 11:31:00.500945  595895 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 11:31:00.501073  595895 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 11:31:00.501248  595895 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 11:31:00.518470  595895 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 11:31:00.521672  595895 out.go:252]   - Generating certificates and keys ...
	I0929 11:31:00.521778  595895 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 11:31:00.521835  595895 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 11:31:00.844406  595895 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 11:31:01.356940  595895 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 11:31:01.469316  595895 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 11:31:01.609628  595895 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 11:31:01.854048  595895 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 11:31:01.854239  595895 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-214441 localhost] and IPs [192.168.39.76 127.0.0.1 ::1]
	I0929 11:31:02.222219  595895 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 11:31:02.222361  595895 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-214441 localhost] and IPs [192.168.39.76 127.0.0.1 ::1]
	I0929 11:31:02.331774  595895 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 11:31:02.452417  595895 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 11:31:03.277600  595895 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 11:31:03.277709  595895 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 11:31:03.337296  595895 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 11:31:03.576740  595895 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 11:31:03.754957  595895 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 11:31:04.028596  595895 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 11:31:04.458901  595895 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 11:31:04.459731  595895 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 11:31:04.461956  595895 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 11:31:04.463895  595895 out.go:252]   - Booting up control plane ...
	I0929 11:31:04.464031  595895 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 11:31:04.464116  595895 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 11:31:04.464220  595895 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 11:31:04.482430  595895 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 11:31:04.482595  595895 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 11:31:04.490659  595895 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 11:31:04.490827  595895 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 11:31:04.490920  595895 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 11:31:04.666361  595895 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 11:31:04.666495  595895 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 11:31:05.175870  595895 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 510.006022ms
	I0929 11:31:05.187944  595895 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 11:31:05.188057  595895 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.76:8443/livez
	I0929 11:31:05.188256  595895 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 11:31:05.188362  595895 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 11:31:07.767053  595895 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.579446651s
	I0929 11:31:09.215755  595895 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.029766048s
	I0929 11:31:11.189186  595895 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.002998119s
	I0929 11:31:11.214239  595895 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 11:31:11.232892  595895 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 11:31:11.255389  595895 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 11:31:11.255580  595895 kubeadm.go:310] [mark-control-plane] Marking the node addons-214441 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 11:31:11.270844  595895 kubeadm.go:310] [bootstrap-token] Using token: 7wgemt.sdnt4jx2dgy9ll51
	I0929 11:31:11.272442  595895 out.go:252]   - Configuring RBAC rules ...
	I0929 11:31:11.272557  595895 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 11:31:11.279364  595895 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 11:31:11.294463  595895 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 11:31:11.298793  595895 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 11:31:11.306582  595895 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 11:31:11.323727  595895 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 11:31:11.601710  595895 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 11:31:12.069553  595895 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 11:31:12.597044  595895 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 11:31:12.597931  595895 kubeadm.go:310] 
	I0929 11:31:12.598017  595895 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 11:31:12.598026  595895 kubeadm.go:310] 
	I0929 11:31:12.598142  595895 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 11:31:12.598153  595895 kubeadm.go:310] 
	I0929 11:31:12.598181  595895 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 11:31:12.598281  595895 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 11:31:12.598374  595895 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 11:31:12.598390  595895 kubeadm.go:310] 
	I0929 11:31:12.598436  595895 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 11:31:12.598442  595895 kubeadm.go:310] 
	I0929 11:31:12.598481  595895 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 11:31:12.598497  595895 kubeadm.go:310] 
	I0929 11:31:12.598577  595895 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 11:31:12.598692  595895 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 11:31:12.598809  595895 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 11:31:12.598828  595895 kubeadm.go:310] 
	I0929 11:31:12.598937  595895 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 11:31:12.599041  595895 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 11:31:12.599055  595895 kubeadm.go:310] 
	I0929 11:31:12.599196  595895 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7wgemt.sdnt4jx2dgy9ll51 \
	I0929 11:31:12.599332  595895 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f2c60916dfe453d33b3c30594ac6ac02ea520a59a4bdac7119cd9e587175e1eb \
	I0929 11:31:12.599365  595895 kubeadm.go:310] 	--control-plane 
	I0929 11:31:12.599397  595895 kubeadm.go:310] 
	I0929 11:31:12.599486  595895 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 11:31:12.599496  595895 kubeadm.go:310] 
	I0929 11:31:12.599568  595895 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7wgemt.sdnt4jx2dgy9ll51 \
	I0929 11:31:12.599705  595895 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f2c60916dfe453d33b3c30594ac6ac02ea520a59a4bdac7119cd9e587175e1eb 
	I0929 11:31:12.601217  595895 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
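With kubeadm init complete, the control plane can be queried directly through the freshly written admin kubeconfig before minikube moves on to CNI and addons (illustrative; kubectl path taken from the log):
  sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes -o wide
  sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system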
	I0929 11:31:12.601272  595895 cni.go:84] Creating CNI manager for ""
	I0929 11:31:12.601305  595895 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 11:31:12.603223  595895 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0929 11:31:12.604766  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0929 11:31:12.618554  595895 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
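The 496-byte /etc/cni/net.d/1-k8s.conflist written here is what makes the bridge CNI choice above concrete. Its exact contents are not in the log; a typical bridge conflist of this kind looks roughly like the commented sketch below (assumed layout, not the verbatim file):
  sudo cat /etc/cni/net.d/1-k8s.conflist
  # {
  #   "cniVersion": "1.0.0",
  #   "name": "bridge",
  #   "plugins": [
  #     {"type": "bridge", "bridge": "bridge", "isGateway": true, "isDefaultGateway": true,
  #      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
  #     {"type": "portmap", "capabilities": {"portMappings": true}}
  #   ]
  # }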
	I0929 11:31:12.641768  595895 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 11:31:12.641942  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:12.641954  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-214441 minikube.k8s.io/updated_at=2025_09_29T11_31_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b8c533eb3dce8338d3d4e7231ea97d1c44ed6f81 minikube.k8s.io/name=addons-214441 minikube.k8s.io/primary=true
	I0929 11:31:12.682767  595895 ops.go:34] apiserver oom_adj: -16
	I0929 11:31:12.800130  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:13.300439  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:13.800339  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:14.300644  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:14.800381  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:15.301049  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:15.801207  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:16.301226  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:16.801024  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:17.300849  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:17.440215  595895 kubeadm.go:1105] duration metric: took 4.798376612s to wait for elevateKubeSystemPrivileges
	I0929 11:31:17.440271  595895 kubeadm.go:394] duration metric: took 17.282308974s to StartCluster
	I0929 11:31:17.440297  595895 settings.go:142] acquiring lock: {Name:mk832bb073af4ae47756dd4494ea087d7aa99c2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:31:17.440448  595895 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21654-591397/kubeconfig
	I0929 11:31:17.441186  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/kubeconfig: {Name:mk64b4db01785e3abeedb000f7d1263b1f56db2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:31:17.441409  595895 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 11:31:17.441416  595895 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 11:31:17.441496  595895 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0929 11:31:17.441684  595895 config.go:182] Loaded profile config "addons-214441": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:31:17.441696  595895 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-214441"
	I0929 11:31:17.441708  595895 addons.go:69] Setting yakd=true in profile "addons-214441"
	I0929 11:31:17.441736  595895 addons.go:238] Setting addon yakd=true in "addons-214441"
	I0929 11:31:17.441757  595895 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-214441"
	I0929 11:31:17.441709  595895 addons.go:69] Setting ingress=true in profile "addons-214441"
	I0929 11:31:17.441784  595895 addons.go:238] Setting addon ingress=true in "addons-214441"
	I0929 11:31:17.441793  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.441803  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.441799  595895 addons.go:69] Setting default-storageclass=true in profile "addons-214441"
	I0929 11:31:17.441840  595895 addons.go:69] Setting gcp-auth=true in profile "addons-214441"
	I0929 11:31:17.441876  595895 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-214441"
	I0929 11:31:17.441886  595895 mustload.go:65] Loading cluster: addons-214441
	I0929 11:31:17.441893  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442145  595895 addons.go:69] Setting registry=true in profile "addons-214441"
	I0929 11:31:17.442160  595895 addons.go:238] Setting addon registry=true in "addons-214441"
	I0929 11:31:17.442191  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442280  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.442300  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.442353  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.442366  595895 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-214441"
	I0929 11:31:17.442371  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.442380  595895 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-214441"
	I0929 11:31:17.442381  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.442385  595895 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-214441"
	I0929 11:31:17.442396  595895 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-214441"
	I0929 11:31:17.442399  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.442425  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.442400  595895 addons.go:69] Setting cloud-spanner=true in profile "addons-214441"
	I0929 11:31:17.442448  595895 addons.go:69] Setting registry-creds=true in profile "addons-214441"
	I0929 11:31:17.442456  595895 addons.go:238] Setting addon cloud-spanner=true in "addons-214441"
	I0929 11:31:17.442469  595895 addons.go:238] Setting addon registry-creds=true in "addons-214441"
	I0929 11:31:17.442478  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.442491  595895 addons.go:69] Setting storage-provisioner=true in profile "addons-214441"
	I0929 11:31:17.442514  595895 addons.go:238] Setting addon storage-provisioner=true in "addons-214441"
	I0929 11:31:17.442543  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442544  595895 addons.go:69] Setting inspektor-gadget=true in profile "addons-214441"
	I0929 11:31:17.442557  595895 addons.go:238] Setting addon inspektor-gadget=true in "addons-214441"
	I0929 11:31:17.442563  595895 addons.go:69] Setting ingress-dns=true in profile "addons-214441"
	I0929 11:31:17.442575  595895 addons.go:238] Setting addon ingress-dns=true in "addons-214441"
	I0929 11:31:17.442588  595895 addons.go:69] Setting metrics-server=true in profile "addons-214441"
	I0929 11:31:17.442591  595895 addons.go:69] Setting volumesnapshots=true in profile "addons-214441"
	I0929 11:31:17.442599  595895 addons.go:238] Setting addon metrics-server=true in "addons-214441"
	I0929 11:31:17.442610  595895 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-214441"
	I0929 11:31:17.442602  595895 config.go:182] Loaded profile config "addons-214441": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:31:17.442620  595895 addons.go:238] Setting addon volumesnapshots=true in "addons-214441"
	I0929 11:31:17.442622  595895 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-214441"
	I0929 11:31:17.442631  595895 addons.go:69] Setting volcano=true in profile "addons-214441"
	I0929 11:31:17.442647  595895 addons.go:238] Setting addon volcano=true in "addons-214441"
	I0929 11:31:17.442826  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442847  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442963  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443004  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443177  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443198  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443212  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443242  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443255  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.443270  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443292  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443439  595895 out.go:179] * Verifying Kubernetes components...
	I0929 11:31:17.443489  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443521  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443564  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443603  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443459  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.443699  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.443852  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443879  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443895  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.444137  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.444199  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.444468  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.454269  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:31:17.455462  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.455556  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.457160  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.457213  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.458697  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.458765  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.459732  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37039
	I0929 11:31:17.459901  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.459979  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.460127  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.460161  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.460170  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.460239  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.460291  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44679
	I0929 11:31:17.460695  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.463901  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.463928  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.464092  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.465162  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.465408  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.466171  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.466824  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.467158  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.479447  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.479512  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.482323  595895 addons.go:238] Setting addon default-storageclass=true in "addons-214441"
	I0929 11:31:17.482391  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.482773  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.482798  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.493064  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45797
	I0929 11:31:17.493710  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40233
	I0929 11:31:17.496980  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.497697  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.497723  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.498583  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.499544  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.500891  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.502188  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.503325  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.503345  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.503676  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33541
	I0929 11:31:17.503826  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.504644  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.504730  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.505209  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.506256  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.506279  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.506340  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36303
	I0929 11:31:17.506984  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46129
	I0929 11:31:17.507294  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.507677  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.507745  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37135
	I0929 11:31:17.508552  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.509057  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.509394  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.509407  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.509415  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.510041  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.510142  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.510163  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.511579  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.513259  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.513521  595895 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-214441"
	I0929 11:31:17.513538  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46151
	I0929 11:31:17.513575  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.514124  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.514166  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.511927  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.514352  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.513596  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42277
	I0929 11:31:17.520718  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.520752  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44361
	I0929 11:31:17.521039  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.521092  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33947
	I0929 11:31:17.521207  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43367
	I0929 11:31:17.520724  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37739
	I0929 11:31:17.522317  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.522444  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.522469  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.522507  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.522852  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.522920  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.523211  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.523225  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.523306  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.523461  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.523473  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.524082  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.524376  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.524523  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.524535  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.524631  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.524746  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34633
	I0929 11:31:17.529249  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.529354  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.529387  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.529766  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.529766  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.529799  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.529807  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.529908  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.530061  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.530343  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.530371  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.530465  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.530878  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.530932  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.531382  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.531639  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.531658  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.532124  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.532483  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.533015  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.533033  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.533472  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.533508  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.534270  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.535229  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.535779  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.535886  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.537511  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.538187  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39035
	I0929 11:31:17.539952  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.540005  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.540222  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I0929 11:31:17.540575  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43245
	I0929 11:31:17.540786  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.540854  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.540890  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.541625  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.541647  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.542032  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.542195  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.542600  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.543176  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.543185  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.543199  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.543204  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.543307  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
	I0929 11:31:17.544136  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.544545  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.544610  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.544640  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.545415  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.545449  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.546464  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.546490  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.546965  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.547387  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.548714  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.548795  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.550669  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38149
	I0929 11:31:17.551412  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.551773  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34225
	I0929 11:31:17.552171  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.552255  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.552199  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.552753  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.552854  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.553685  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.553778  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.554307  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.554514  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.555149  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.557383  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.558025  595895 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 11:31:17.559210  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 11:31:17.559231  595895 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 11:31:17.559262  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.559338  595895 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.12.2
	I0929 11:31:17.560620  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.560681  595895 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.12.2
	I0929 11:31:17.560823  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34665
	I0929 11:31:17.561393  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.562236  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.562295  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.562751  595895 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 11:31:17.563140  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.563492  595895 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.12.2
	I0929 11:31:17.564252  595895 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 11:31:17.564269  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 11:31:17.564289  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.564293  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.564684  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.564737  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.565023  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.565146  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.567800  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.568057  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.568262  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36805
	I0929 11:31:17.568522  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.568701  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.569229  595895 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0929 11:31:17.569253  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (498149 bytes)
	I0929 11:31:17.569273  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.569959  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.570047  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.572257  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.572409  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.572423  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.573470  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.573495  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.573534  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42077
	I0929 11:31:17.574161  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.574166  595895 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 11:31:17.574420  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.574975  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.575036  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.575329  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.575415  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.575430  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.575671  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.575865  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.576099  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.577061  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.577247  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.577378  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.577535  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.577554  595895 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 11:31:17.577582  595895 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 11:31:17.577605  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.579736  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40239
	I0929 11:31:17.580597  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.581383  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.581446  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.582289  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.582694  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44947
	I0929 11:31:17.582952  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.583853  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.585630  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46433
	I0929 11:31:17.585637  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38605
	I0929 11:31:17.586733  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.586755  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.586846  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.587240  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.587458  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.587548  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.587503  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I0929 11:31:17.588342  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.588817  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.588838  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.589534  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35991
	I0929 11:31:17.589680  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.589727  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.589953  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.590461  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.590684  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.590701  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.590814  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.590864  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.591866  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.592243  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.592985  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.593774  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.593791  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.594759  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.595210  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.595390  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.596824  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.597871  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.598227  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.598762  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35305
	I0929 11:31:17.599344  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 11:31:17.600871  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.600871  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.600871  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.600928  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.600961  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.600994  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39573
	I0929 11:31:17.601002  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43843
	I0929 11:31:17.601641  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 11:31:17.601827  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.601850  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.601913  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.602052  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.602151  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I0929 11:31:17.602155  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.602306  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.602590  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.602610  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.602811  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.602977  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.603038  595895 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 11:31:17.603089  595895 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 11:31:17.603260  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.603328  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.603564  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.603593  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.603752  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.604258  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.604320  595895 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 11:31:17.604825  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.604525  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.605686  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.605694  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.604846  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.604946  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 11:31:17.605125  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.606062  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.606154  595895 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 11:31:17.606169  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.606174  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.607283  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.607459  595895 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 11:31:17.607513  595895 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 11:31:17.608000  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 11:31:17.608022  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.607722  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.607825  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.608327  595895 out.go:179]   - Using image docker.io/busybox:stable
	I0929 11:31:17.608504  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.609208  595895 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 11:31:17.609380  595895 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 11:31:17.609617  595895 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 11:31:17.609695  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.609885  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
	I0929 11:31:17.610214  595895 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 11:31:17.610480  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 11:31:17.610442  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.610634  595895 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:31:17.610651  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 11:31:17.610666  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.610637  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 11:31:17.610551  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.611056  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.611127  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.611242  595895 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 11:31:17.612177  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.612200  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.612367  595895 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 11:31:17.612539  595895 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 11:31:17.612558  595895 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 11:31:17.612574  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 11:31:17.612702  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.612652  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.613066  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.613132  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.613978  595895 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 11:31:17.614058  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 11:31:17.614157  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.614015  595895 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 11:31:17.614286  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 11:31:17.614314  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.614339  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40205
	I0929 11:31:17.614532  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 11:31:17.614774  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.614918  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 11:31:17.615384  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.615994  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.616036  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.616065  595895 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 11:31:17.616139  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 11:31:17.616150  595895 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 11:31:17.616217  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.616451  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.616766  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.617254  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 11:31:17.618390  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.618595  595895 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 11:31:17.619658  595895 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 11:31:17.619715  595895 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 11:31:17.619728  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 11:31:17.619752  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.619788  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 11:31:17.620191  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.620909  595895 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 11:31:17.620926  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 11:31:17.621015  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.621216  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.622235  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.622260  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.622296  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 11:31:17.622987  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.623010  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.623146  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.623384  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.623851  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 11:31:17.623870  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 11:31:17.623891  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.623910  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.623977  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.623991  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.624284  595895 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 11:31:17.624300  595895 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 11:31:17.624317  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.624324  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.624330  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.624655  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.624690  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.625088  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.625297  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.626099  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.626182  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.626247  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.626251  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.626597  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.626789  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.626890  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.627091  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.627238  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.627284  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.627374  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.627541  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.627907  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.627938  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.627949  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.627979  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.628066  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.628081  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.628268  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.628308  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.628533  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.628572  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.628735  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.628848  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.629214  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.629249  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.629266  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.629512  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.629592  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.629606  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.629764  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.629861  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.630008  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.630062  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.630142  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.630197  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.630311  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.630370  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.630910  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.631305  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.631821  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.632272  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.632296  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.632442  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.632503  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.632710  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.632789  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.633084  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.633162  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.633176  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.633207  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.633228  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.633242  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.633391  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.633435  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.633557  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.633619  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.633759  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.633793  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.634042  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.634042  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.634131  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.634164  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.634219  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.634716  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.634894  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.635088  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.635265  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	W0929 11:31:17.919750  595895 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44698->192.168.39.76:22: read: connection reset by peer
	I0929 11:31:17.919798  595895 retry.go:31] will retry after 127.603101ms: ssh: handshake failed: read tcp 192.168.39.1:44698->192.168.39.76:22: read: connection reset by peer
	W0929 11:31:17.927998  595895 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44716->192.168.39.76:22: read: connection reset by peer
	I0929 11:31:17.928034  595895 retry.go:31] will retry after 352.316454ms: ssh: handshake failed: read tcp 192.168.39.1:44716->192.168.39.76:22: read: connection reset by peer
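The two handshake failures above are transient: sshd in the freshly booted guest resets the first connections, and the tooling waits a short, jittered interval before dialing again instead of aborting provisioning. A minimal Go sketch of that dial-with-retry pattern (dialWithRetry is a hypothetical helper for illustration, not minikube's actual sshutil/retry code):

package main

import (
	"fmt"
	"math/rand"
	"net"
	"time"
)

// dialWithRetry illustrates the behaviour logged above: on a dial failure
// (e.g. "connection reset by peer") it waits a short, jittered delay and
// tries again until maxAttempts is exhausted.
func dialWithRetry(addr string, maxAttempts int) (net.Conn, error) {
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		delay := time.Duration(100+rand.Intn(300)) * time.Millisecond
		fmt.Printf("dial failure (will retry after %v): %v\n", delay, err)
		time.Sleep(delay)
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", maxAttempts, lastErr)
}

func main() {
	// Address taken from the log above; any TCP endpoint works the same way.
	conn, err := dialWithRetry("192.168.39.76:22", 5)
	if err != nil {
		fmt.Println(err)
		return
	}
	conn.Close()
}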
	I0929 11:31:18.834850  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 11:31:18.834892  595895 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 11:31:18.867206  595895 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 11:31:18.867237  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 11:31:18.998018  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 11:31:19.019969  595895 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.57851512s)
	I0929 11:31:19.019988  595895 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.56567428s)
	I0929 11:31:19.020058  595895 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:31:19.020195  595895 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 11:31:19.047383  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 11:31:19.178551  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 11:31:19.194460  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 11:31:19.203493  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:31:19.224634  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0929 11:31:19.236908  595895 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:19.236937  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 11:31:19.339094  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 11:31:19.470368  595895 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 11:31:19.470407  595895 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 11:31:19.482955  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 11:31:19.507279  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 11:31:19.533452  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 11:31:19.533481  595895 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 11:31:19.580275  595895 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 11:31:19.580310  595895 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 11:31:19.612191  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 11:31:19.612228  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 11:31:19.656222  595895 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 11:31:19.656250  595895 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 11:31:19.707608  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 11:31:19.720943  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:19.949642  595895 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 11:31:19.949675  595895 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 11:31:20.010236  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 11:31:20.010269  595895 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 11:31:20.143152  595895 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 11:31:20.143179  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 11:31:20.164194  595895 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 11:31:20.164223  595895 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 11:31:20.178619  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 11:31:20.178652  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 11:31:20.352326  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 11:31:20.352354  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 11:31:20.399905  595895 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 11:31:20.399935  595895 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 11:31:20.528800  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 11:31:20.554026  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 11:31:20.608085  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 11:31:20.608132  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 11:31:20.855879  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 11:31:20.901072  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 11:31:20.901124  595895 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 11:31:21.046874  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 11:31:21.046903  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 11:31:21.279957  595895 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 11:31:21.279985  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 11:31:21.494633  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 11:31:21.494662  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 11:31:21.896279  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 11:31:22.355612  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 11:31:22.355644  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 11:31:23.136046  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 11:31:23.136083  595895 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 11:31:23.742895  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 11:31:23.742921  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 11:31:24.397559  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 11:31:24.397588  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 11:31:24.806696  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 11:31:24.806729  595895 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 11:31:25.028630  595895 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 11:31:25.028675  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:25.032868  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:25.033494  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:25.033526  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:25.033760  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:25.034027  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:25.034259  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:25.034422  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:25.610330  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 11:31:25.954809  595895 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 11:31:26.260607  595895 addons.go:238] Setting addon gcp-auth=true in "addons-214441"
	I0929 11:31:26.260695  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:26.261024  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:26.261068  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:26.276135  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36421
	I0929 11:31:26.276726  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:26.277323  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:26.277354  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:26.277924  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:26.278456  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:26.278490  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:26.293277  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40243
	I0929 11:31:26.293786  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:26.294319  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:26.294344  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:26.294858  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:26.295136  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:26.297279  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:26.297583  595895 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 11:31:26.297612  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:26.301409  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:26.302065  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:26.302093  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:26.302272  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:26.302474  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:26.302636  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:26.302830  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:26.648618  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.65053686s)
	I0929 11:31:26.648643  595895 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.628556534s)
	I0929 11:31:26.648693  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:26.648703  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:26.648707  595895 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.628486823s)
	I0929 11:31:26.648740  595895 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0929 11:31:26.648855  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.601423652s)
	I0929 11:31:26.648889  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:26.648898  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:26.649041  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:26.649056  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:26.649066  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:26.649073  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:26.649181  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:26.649214  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:26.649225  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:26.649256  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:26.649265  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:26.649555  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:26.649585  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:26.649698  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:26.649728  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:26.649741  595895 node_ready.go:35] waiting up to 6m0s for node "addons-214441" to be "Ready" ...
	I0929 11:31:26.649625  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:26.649665  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:26.797678  595895 node_ready.go:49] node "addons-214441" is "Ready"
	I0929 11:31:26.797712  595895 node_ready.go:38] duration metric: took 147.94134ms for node "addons-214441" to be "Ready" ...
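The readiness check above re-reads the Node object until its Ready condition reports True; here kubelet is already up, so the wait finishes after a single poll (about 148ms). A minimal client-go sketch of such a loop, assuming an already-constructed clientset (waitForNodeReady is a hypothetical helper, not minikube's node_ready implementation):

package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeReady re-reads the named Node every few seconds until its Ready
// condition is True or the context expires.
func waitForNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
	ticker := time.NewTicker(3 * time.Second)
	defer ticker.Stop()
	for {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
		case <-ticker.C:
		}
	}
}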
	I0929 11:31:26.797735  595895 api_server.go:52] waiting for apiserver process to appear ...
	I0929 11:31:26.797797  595895 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:31:27.078868  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:27.078896  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:27.079284  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:27.079351  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:27.079372  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:27.220384  595895 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-214441" context rescaled to 1 replicas
	I0929 11:31:30.522194  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.34358993s)
	I0929 11:31:30.522263  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.327765304s)
	I0929 11:31:30.522284  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522297  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522297  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522308  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522336  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.318803941s)
	I0929 11:31:30.522386  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522398  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522641  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.522658  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.522685  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522695  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522794  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.522804  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.522813  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522819  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522874  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:30.522863  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.522905  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.522914  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522922  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522952  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:30.522984  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.522990  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.523183  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:30.523188  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.523205  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.523212  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.523216  595895 addons.go:479] Verifying addon ingress=true in "addons-214441"
	I0929 11:31:30.523222  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.527182  595895 out.go:179] * Verifying ingress addon...
	I0929 11:31:30.529738  595895 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 11:31:30.708830  595895 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 11:31:30.708859  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:31.235125  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:31.629964  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:32.068126  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:32.586294  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:33.055440  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:33.661344  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:33.865322  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (14.640641229s)
	I0929 11:31:33.865361  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (14.526214451s)
	I0929 11:31:33.865396  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865407  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865413  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (14.382417731s)
	I0929 11:31:33.865425  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865448  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (14.358144157s)
	I0929 11:31:33.865456  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865470  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865472  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865527  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (14.157883934s)
	I0929 11:31:33.865528  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865545  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865554  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865410  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865659  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (14.144676501s)
	W0929 11:31:33.865707  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:33.865740  595895 retry.go:31] will retry after 127.952259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:33.865790  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (13.336965067s)
	I0929 11:31:33.865796  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.865807  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.865810  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865818  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865821  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865826  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865864  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.865883  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.865895  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.865895  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.865906  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865922  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.865928  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865931  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.865939  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865945  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865960  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (13.311901558s)
	I0929 11:31:33.865978  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865986  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866077  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (13.010152282s)
	I0929 11:31:33.866096  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.866124  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866162  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866187  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866214  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866223  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866230  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.866237  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866283  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (11.969964695s)
	W0929 11:31:33.866347  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 11:31:33.866370  595895 retry.go:31] will retry after 213.926415ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
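Both apply failures above end in a retry: the ig-crd.yaml manifest fails validation ("apiVersion not set, kind not set") and the csi-hostpath snapshot class cannot be mapped because its CRDs are not yet established, and the log further down (at 11:31:33.994 and 11:31:34.081) re-runs the same manifest sets with kubectl apply --force. A rough Go sketch of that escalation, invoking kubectl directly rather than through minikube's ssh_runner (applyWithForceFallback and its arguments are illustrative assumptions, not minikube's addons code):

package applyretry

import (
	"fmt"
	"os/exec"
)

// applyWithForceFallback mirrors the pattern visible in this log: try a plain
// "kubectl apply" of the given manifests first and, if that fails, re-apply
// the same set with --force, as the retries above do.
func applyWithForceFallback(kubeconfig string, manifests []string) error {
	args := []string{"--kubeconfig=" + kubeconfig, "apply"}
	for _, f := range manifests {
		args = append(args, "-f", f)
	}
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err == nil {
		return nil
	}
	fmt.Printf("apply failed, retrying with --force:\n%s\n", out)
	forced := append([]string{"--kubeconfig=" + kubeconfig, "apply", "--force"}, args[2:]...)
	if out, err := exec.Command("kubectl", forced...).CombinedOutput(); err != nil {
		return fmt.Errorf("apply --force failed: %v\n%s", err, out)
	}
	return nil
}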
	I0929 11:31:33.866587  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866618  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866622  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866627  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.866630  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866636  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866640  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866651  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866662  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866606  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866736  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866752  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866766  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.866780  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866875  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866910  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866925  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.867202  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.867264  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.867284  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.867303  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.867339  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.867618  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.867761  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.867769  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.867778  595895 addons.go:479] Verifying addon registry=true in "addons-214441"
	I0929 11:31:33.868269  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.868300  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.868305  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.868451  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.868463  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.868472  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.868479  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.869037  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.869070  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.869076  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.869084  595895 addons.go:479] Verifying addon metrics-server=true in "addons-214441"
	I0929 11:31:33.869798  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.869839  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.869847  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.869975  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.870031  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.871564  595895 out.go:179] * Verifying registry addon...
	I0929 11:31:33.872479  595895 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-214441 service yakd-dashboard -n yakd-dashboard
	
	I0929 11:31:33.874294  595895 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 11:31:33.993863  595895 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 11:31:33.993900  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:33.994009  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:34.081538  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 11:31:34.115447  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:34.146570  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:34.146609  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:34.146947  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:34.146967  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:34.413578  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.803181451s)
	I0929 11:31:34.413616  595895 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (8.116003731s)
	I0929 11:31:34.413656  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:34.413669  595895 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (7.615843233s)
	I0929 11:31:34.413709  595895 api_server.go:72] duration metric: took 16.972266985s to wait for apiserver process to appear ...
	I0929 11:31:34.413722  595895 api_server.go:88] waiting for apiserver healthz status ...
	I0929 11:31:34.413750  595895 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0929 11:31:34.413675  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:34.414213  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:34.414230  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:34.414254  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:34.414261  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:34.414511  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:34.414529  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:34.414543  595895 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-214441"
	I0929 11:31:34.415286  595895 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 11:31:34.416180  595895 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 11:31:34.417833  595895 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 11:31:34.418933  595895 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 11:31:34.419343  595895 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 11:31:34.419365  595895 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 11:31:34.428017  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:34.435805  595895 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
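The two lines above are the health gate: the apiserver counts as healthy once GET /healthz on the secure port answers 200 with the body "ok". A minimal sketch of that poll, assuming an *http.Client that already trusts the cluster CA (waitForHealthz is a hypothetical helper, not minikube's api_server code):

package healthz

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the given /healthz URL (e.g.
// https://192.168.39.76:8443/healthz from the log above) until it returns
// 200 "ok" or the context deadline is reached. The client passed in is
// assumed to be configured with the cluster's CA certificate.
func waitForHealthz(ctx context.Context, client *http.Client, url string) error {
	for {
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		if err != nil {
			return err
		}
		resp, err := client.Do(req)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver never became healthy: %w", ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}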
	I0929 11:31:34.443092  595895 api_server.go:141] control plane version: v1.34.0
	I0929 11:31:34.443139  595895 api_server.go:131] duration metric: took 29.409177ms to wait for apiserver health ...
	I0929 11:31:34.443150  595895 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 11:31:34.495447  595895 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 11:31:34.495473  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:34.527406  595895 system_pods.go:59] 20 kube-system pods found
	I0929 11:31:34.527452  595895 system_pods.go:61] "amd-gpu-device-plugin-7jx7f" [97d8a167-36be-44b5-b99c-2b55db99df3e] Running
	I0929 11:31:34.527458  595895 system_pods.go:61] "coredns-66bc5c9577-fkh52" [bd9cc142-39db-4ed9-9ecc-3d270499136c] Running
	I0929 11:31:34.527463  595895 system_pods.go:61] "coredns-66bc5c9577-sgnb2" [343874b8-6c4d-4e36-8ebf-a379e3a93a98] Running
	I0929 11:31:34.527471  595895 system_pods.go:61] "csi-hostpath-attacher-0" [27d9af25-8d41-4c37-9359-c7bd4f88f09f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 11:31:34.527475  595895 system_pods.go:61] "csi-hostpath-resizer-0" [4f24e046-d5b2-41ff-a051-b0a572bf9348] Pending
	I0929 11:31:34.527484  595895 system_pods.go:61] "csi-hostpathplugin-8279f" [3cc6db25-521c-4711-9d36-e22ab6d16249] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 11:31:34.527490  595895 system_pods.go:61] "etcd-addons-214441" [1710b264-ccec-4054-8a1c-7d8e5ce49163] Running
	I0929 11:31:34.527494  595895 system_pods.go:61] "kube-apiserver-addons-214441" [302fdd61-6c51-4e6e-a1af-7856ccb9f2ca] Running
	I0929 11:31:34.527502  595895 system_pods.go:61] "kube-controller-manager-addons-214441" [3e9c3a44-d14e-4335-a141-f7c648e6159d] Running
	I0929 11:31:34.527507  595895 system_pods.go:61] "kube-ingress-dns-minikube" [12920e9d-debd-4783-9f79-1c3a345e576e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 11:31:34.527513  595895 system_pods.go:61] "kube-proxy-d9fnb" [229af565-3300-4cb4-8289-f3d9b4a9af81] Running
	I0929 11:31:34.527520  595895 system_pods.go:61] "kube-scheduler-addons-214441" [bcaec156-9259-4787-9d02-01a2203b43ac] Running
	I0929 11:31:34.527524  595895 system_pods.go:61] "metrics-server-85b7d694d7-zlrv7" [003bc9b9-be00-4e08-8af7-3443d0502181] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 11:31:34.527533  595895 system_pods.go:61] "nvidia-device-plugin-daemonset-x7b8m" [b96ad59d-d30d-4437-a5c7-ce0c5fc69348] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 11:31:34.527541  595895 system_pods.go:61] "registry-66898fdd98-d7zx7" [e0282924-ef3d-48eb-8906-ea10f183b39e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 11:31:34.527547  595895 system_pods.go:61] "registry-creds-764b6fb674-td8pw" [c6001777-2f97-49d8-972b-27d373557795] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 11:31:34.527557  595895 system_pods.go:61] "registry-proxy-grb7m" [d79830e7-a640-46a7-9b07-38e39aceac96] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 11:31:34.527562  595895 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pw4g9" [74b07f42-da97-4ad5-8fdb-748ab5cfbde2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:31:34.527571  595895 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wvh2l" [987a4a45-4d77-43ac-816c-ebc08faf51bc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:31:34.527575  595895 system_pods.go:61] "storage-provisioner" [e1a79041-edc7-4f8e-96cb-8b3566893a0e] Running
	I0929 11:31:34.527582  595895 system_pods.go:74] duration metric: took 84.42539ms to wait for pod list to return data ...
	I0929 11:31:34.527594  595895 default_sa.go:34] waiting for default service account to be created ...
	I0929 11:31:34.549252  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:34.556947  595895 default_sa.go:45] found service account: "default"
	I0929 11:31:34.556977  595895 default_sa.go:55] duration metric: took 29.376735ms for default service account to be created ...
	I0929 11:31:34.556988  595895 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 11:31:34.596290  595895 system_pods.go:86] 20 kube-system pods found
	I0929 11:31:34.596322  595895 system_pods.go:89] "amd-gpu-device-plugin-7jx7f" [97d8a167-36be-44b5-b99c-2b55db99df3e] Running
	I0929 11:31:34.596330  595895 system_pods.go:89] "coredns-66bc5c9577-fkh52" [bd9cc142-39db-4ed9-9ecc-3d270499136c] Running
	I0929 11:31:34.596334  595895 system_pods.go:89] "coredns-66bc5c9577-sgnb2" [343874b8-6c4d-4e36-8ebf-a379e3a93a98] Running
	I0929 11:31:34.596343  595895 system_pods.go:89] "csi-hostpath-attacher-0" [27d9af25-8d41-4c37-9359-c7bd4f88f09f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 11:31:34.596349  595895 system_pods.go:89] "csi-hostpath-resizer-0" [4f24e046-d5b2-41ff-a051-b0a572bf9348] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 11:31:34.596357  595895 system_pods.go:89] "csi-hostpathplugin-8279f" [3cc6db25-521c-4711-9d36-e22ab6d16249] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 11:31:34.596361  595895 system_pods.go:89] "etcd-addons-214441" [1710b264-ccec-4054-8a1c-7d8e5ce49163] Running
	I0929 11:31:34.596365  595895 system_pods.go:89] "kube-apiserver-addons-214441" [302fdd61-6c51-4e6e-a1af-7856ccb9f2ca] Running
	I0929 11:31:34.596369  595895 system_pods.go:89] "kube-controller-manager-addons-214441" [3e9c3a44-d14e-4335-a141-f7c648e6159d] Running
	I0929 11:31:34.596375  595895 system_pods.go:89] "kube-ingress-dns-minikube" [12920e9d-debd-4783-9f79-1c3a345e576e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 11:31:34.596381  595895 system_pods.go:89] "kube-proxy-d9fnb" [229af565-3300-4cb4-8289-f3d9b4a9af81] Running
	I0929 11:31:34.596385  595895 system_pods.go:89] "kube-scheduler-addons-214441" [bcaec156-9259-4787-9d02-01a2203b43ac] Running
	I0929 11:31:34.596390  595895 system_pods.go:89] "metrics-server-85b7d694d7-zlrv7" [003bc9b9-be00-4e08-8af7-3443d0502181] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 11:31:34.596398  595895 system_pods.go:89] "nvidia-device-plugin-daemonset-x7b8m" [b96ad59d-d30d-4437-a5c7-ce0c5fc69348] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 11:31:34.596404  595895 system_pods.go:89] "registry-66898fdd98-d7zx7" [e0282924-ef3d-48eb-8906-ea10f183b39e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 11:31:34.596409  595895 system_pods.go:89] "registry-creds-764b6fb674-td8pw" [c6001777-2f97-49d8-972b-27d373557795] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 11:31:34.596413  595895 system_pods.go:89] "registry-proxy-grb7m" [d79830e7-a640-46a7-9b07-38e39aceac96] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 11:31:34.596421  595895 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pw4g9" [74b07f42-da97-4ad5-8fdb-748ab5cfbde2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:31:34.596427  595895 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wvh2l" [987a4a45-4d77-43ac-816c-ebc08faf51bc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:31:34.596430  595895 system_pods.go:89] "storage-provisioner" [e1a79041-edc7-4f8e-96cb-8b3566893a0e] Running
	I0929 11:31:34.596439  595895 system_pods.go:126] duration metric: took 39.444621ms to wait for k8s-apps to be running ...
	I0929 11:31:34.596450  595895 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 11:31:34.596507  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:31:34.638029  595895 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 11:31:34.638063  595895 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 11:31:34.896745  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:35.000193  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:35.038316  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:35.057490  595895 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 11:31:35.057521  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 11:31:35.300242  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 11:31:35.379546  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:35.428677  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:35.535091  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:35.881406  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:35.938231  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:36.039311  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:36.382155  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:36.425663  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:36.535684  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:36.886954  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:36.927490  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:37.044975  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:37.382165  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:37.431026  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:37.547302  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:37.920673  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:37.944368  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:38.063651  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:38.330176  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.336121933s)
	W0929 11:31:38.330254  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:38.330284  595895 retry.go:31] will retry after 312.007159ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
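Note on the repeated failure above: every retry of the ig-crd.yaml / ig-deployment.yaml apply fails the same way because kubectl's client-side validation reports that at least one YAML document in /etc/kubernetes/addons/ig-crd.yaml does not declare apiVersion and kind, which every Kubernetes manifest document must set (for a CRD, typically apiVersion: apiextensions.k8s.io/v1 and kind: CustomResourceDefinition). A minimal way to inspect the file on the node, sketched here for illustration using the same profile name and binary path that appear elsewhere in this report (the head invocation itself is an assumption, not part of the test run):

	out/minikube-linux-amd64 -p addons-214441 ssh -- sudo head -n 20 /etc/kubernetes/addons/ig-crd.yaml

The --validate=false workaround suggested in the stderr only disables this client-side check; it does not supply the missing apiVersion/kind fields.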
	I0929 11:31:38.330290  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.248696545s)
	I0929 11:31:38.330341  595895 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.73381029s)
	I0929 11:31:38.330367  595895 system_svc.go:56] duration metric: took 3.733914032s WaitForService to wait for kubelet
	I0929 11:31:38.330377  595895 kubeadm.go:578] duration metric: took 20.888935766s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:31:38.330403  595895 node_conditions.go:102] verifying NodePressure condition ...
	I0929 11:31:38.330343  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:38.330449  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:38.330448  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (3.030164486s)
	I0929 11:31:38.330495  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:38.330509  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:38.330817  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:38.330832  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:38.330841  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:38.330848  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:38.330851  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:38.330882  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:38.330895  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:38.330903  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:38.330910  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:38.331221  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:38.331223  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:38.331238  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:38.331251  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:38.331258  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:38.332465  595895 addons.go:479] Verifying addon gcp-auth=true in "addons-214441"
	I0929 11:31:38.334695  595895 out.go:179] * Verifying gcp-auth addon...
	I0929 11:31:38.336858  595895 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 11:31:38.341614  595895 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0929 11:31:38.341645  595895 node_conditions.go:123] node cpu capacity is 2
	I0929 11:31:38.341662  595895 node_conditions.go:105] duration metric: took 11.25287ms to run NodePressure ...
	I0929 11:31:38.341688  595895 start.go:241] waiting for startup goroutines ...
	I0929 11:31:38.343873  595895 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 11:31:38.343896  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:38.381193  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:38.423947  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:38.537472  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:38.642514  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:38.843272  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:38.944959  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:38.945123  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:39.033029  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:39.342350  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:39.380435  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:39.424230  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:39.537307  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:39.645310  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.002737784s)
	W0929 11:31:39.645357  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:39.645385  595895 retry.go:31] will retry after 298.904966ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:39.841477  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:39.879072  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:39.922915  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:39.945025  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:40.034681  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:40.343272  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:40.382403  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:40.422942  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:40.539442  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:40.844610  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:40.879893  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:40.924951  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:41.033826  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:41.124246  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.179166796s)
	W0929 11:31:41.124315  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:41.124339  595895 retry.go:31] will retry after 649.538473ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:41.343005  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:41.380641  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:41.425734  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:41.533709  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:41.774560  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:41.841236  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:41.878527  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:41.924650  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:42.035789  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:42.342468  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:42.380731  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:42.426156  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:42.534471  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:42.785912  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.011289133s)
	W0929 11:31:42.785977  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:42.786005  595895 retry.go:31] will retry after 983.289132ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:42.842132  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:42.879170  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:42.924415  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:43.036251  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:43.343664  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:43.382521  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:43.423598  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:43.534301  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:43.770317  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:43.843700  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:43.880339  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:43.925260  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:44.035702  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:44.342152  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:44.380186  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:44.427570  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:44.537930  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:44.812756  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.042397237s)
	W0929 11:31:44.812812  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:44.812836  595895 retry.go:31] will retry after 2.137947671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:44.843045  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:44.881899  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:44.924762  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:45.035718  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:45.343550  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:45.378897  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:45.424866  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:45.534338  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:45.841433  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:45.877671  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:45.923645  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:46.034379  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:46.372337  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:46.406356  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:46.426866  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:46.534032  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:46.842343  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:46.879578  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:46.925175  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:46.951146  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:47.034343  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:47.344240  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:47.382773  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:47.424668  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:47.540037  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:47.843427  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:47.879391  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:47.924262  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:47.960092  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.008893629s)
	W0929 11:31:47.960177  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:47.960206  595895 retry.go:31] will retry after 2.504757299s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:48.033591  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:48.341481  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:48.378697  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:48.424514  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:48.536592  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:48.842185  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:48.879742  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:48.923614  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:49.034098  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:49.340781  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:49.379506  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:49.423231  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:49.534207  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:49.842436  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:49.877896  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:49.924231  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:50.034614  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:50.341556  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:50.379007  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:50.423685  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:50.465827  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:50.536792  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:50.843824  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:50.879454  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:50.924711  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:51.035609  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:51.343958  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:51.379841  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:51.424239  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:51.468054  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.002171892s)
	W0929 11:31:51.468114  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:51.468140  595895 retry.go:31] will retry after 5.613548218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:51.533585  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:51.963029  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:51.963886  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:51.964026  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:52.060713  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:52.343223  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:52.378836  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:52.424767  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:52.534427  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:52.849585  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:52.879670  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:52.948684  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:53.048366  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:53.346453  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:53.380741  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:53.426760  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:53.533978  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:53.840987  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:53.879766  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:53.924223  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:54.035753  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:54.342742  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:54.378763  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:54.423439  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:54.535260  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:54.841859  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:54.880183  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:54.925299  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:55.033854  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:55.340853  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:55.378822  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:55.424172  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:55.534313  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:55.842189  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:55.879647  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:55.925521  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:56.034145  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:56.341524  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:56.384803  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:56.424070  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:56.533658  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:56.845007  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:56.881917  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:56.944166  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:57.044730  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:57.082647  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:57.345840  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:57.379131  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:57.425387  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:57.534328  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:57.843711  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:57.879327  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:57.925624  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:58.038058  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:58.345139  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:58.379479  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:58.427479  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:58.431242  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.348544969s)
	W0929 11:31:58.431293  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:58.431314  595895 retry.go:31] will retry after 5.599503168s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:58.535825  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:58.841717  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:58.878293  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:58.926559  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:59.035878  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:59.341486  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:59.381532  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:59.425077  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:59.532752  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:59.841172  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:59.878180  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:59.923096  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:00.034481  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:00.557941  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:00.559858  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:00.559963  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:00.560670  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:00.841990  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:00.879357  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:00.926097  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:01.036394  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:01.344642  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:01.379875  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:01.425784  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:01.534466  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:01.842499  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:01.878243  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:01.924047  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:02.033958  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:02.342377  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:02.380154  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:02.423813  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:02.535090  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:02.843862  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:02.879556  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:02.924521  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:03.034759  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:03.340099  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:03.378625  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:03.423534  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:03.534511  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:03.841201  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:03.878471  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:03.924393  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:04.031608  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:32:04.037031  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:04.344499  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:04.378709  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:04.426297  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:04.536239  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:04.842255  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:04.878783  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:04.925876  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:05.037628  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:05.250099  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.218439403s)
	W0929 11:32:05.250163  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:05.250186  595895 retry.go:31] will retry after 6.3969875s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
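
The validation error above means at least one YAML document handed to kubectl from /etc/kubernetes/addons/ig-crd.yaml lacks the mandatory top-level apiVersion and kind fields, so client-side validation rejects that file even though the rest of the manifests apply cleanly ("unchanged"/"configured"). As a hedged illustration only (this is not the real ig-crd.yaml; the group, kind and names below are placeholders), this is the minimal shape every document needs to pass the same validation:

# Minimal sketch: every document sent to `kubectl apply` must carry
# apiVersion, kind and metadata.name to pass client-side validation.
# The group/kind below are hypothetical, not the inspektor-gadget CRD.
kubectl apply --dry-run=client -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.demo.example.com
spec:
  group: demo.example.com
  names:
    kind: Example
    plural: examples
    singular: example
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
EOF
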
	I0929 11:32:05.342875  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:05.380683  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:05.424490  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:05.534483  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:05.841804  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:05.880284  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:05.923385  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:06.034868  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:06.341952  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:06.378384  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:06.426408  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:06.535793  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:06.842154  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:06.880699  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:06.924358  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:07.035474  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:07.343686  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:07.378323  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:07.423762  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:07.535390  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:07.843851  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:07.881716  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:07.927684  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:08.037583  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:08.341340  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:08.380517  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:08.424488  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:08.535292  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:08.841002  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:08.879020  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:08.924253  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:09.089297  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:09.340800  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:09.377819  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:09.423823  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:09.534297  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:09.849243  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:09.950172  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:09.950267  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:10.036059  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:10.346922  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:10.379976  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:10.424634  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:10.538864  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:10.842015  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:10.879192  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:10.925328  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:11.040957  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:11.349029  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:11.380885  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:11.452716  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:11.533526  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:11.648223  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:32:11.846882  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:11.881994  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:11.924898  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:12.037323  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:12.342006  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:12.378476  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:12.425404  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:12.544040  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:12.792386  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.144111976s)
	W0929 11:32:12.792447  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:12.792475  595895 retry.go:31] will retry after 13.411476283s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:12.842021  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:12.880179  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:12.924788  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:13.040328  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:13.342434  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:13.378229  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:13.423792  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:13.533728  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:13.843276  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:13.881114  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:13.924958  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:14.034759  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:14.342679  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:14.391569  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:14.496903  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:14.537421  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:14.843175  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:14.880166  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:14.923743  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:15.033994  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:15.343313  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:15.378881  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:15.423448  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:15.538003  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:15.845026  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:15.879663  595895 kapi.go:107] duration metric: took 42.005359357s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 11:32:15.924537  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:16.034645  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:16.341847  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:16.423671  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:16.542699  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:16.844239  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:16.931285  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:17.038278  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:17.353396  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:17.429078  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:17.543634  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:17.844298  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:17.946425  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:18.041877  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:18.345833  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:18.428431  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:18.540908  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:18.840650  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:18.941953  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:19.044517  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:19.341978  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:19.424948  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:19.534807  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:19.839721  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:19.923994  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:20.033049  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:20.342737  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:20.425291  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:20.540624  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:20.844143  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:20.923381  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:21.034820  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:21.343509  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:21.423753  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:21.533929  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:21.841334  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:21.923232  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:22.035002  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:22.630689  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:22.632895  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:22.632941  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:22.845479  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:22.926876  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:23.038229  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:23.355255  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:23.427225  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:23.538625  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:23.844878  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:23.934777  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:24.035280  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:24.346419  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:24.423729  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:24.534589  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:24.842134  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:24.923902  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:25.034892  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:25.362314  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:25.488458  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:25.587385  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:25.861373  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:25.929934  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:26.034355  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:26.204639  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:32:26.361386  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:26.429512  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:26.537022  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:26.843446  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:26.926054  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:27.035634  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:27.344336  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:27.424901  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:27.537642  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:27.644135  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.439429306s)
	W0929 11:32:27.644198  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:27.644227  595895 retry.go:31] will retry after 29.327619656s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:27.842768  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:27.923415  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:28.034767  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:28.343738  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:28.445503  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:28.546159  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:28.851845  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:28.927009  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:29.033400  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:29.341998  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:29.426197  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:29.537012  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:29.842012  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:29.924188  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:30.034037  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:30.346865  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:30.430853  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:30.542769  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:30.842367  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:30.922904  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:31.033768  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:31.341881  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:31.425338  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:31.535963  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:31.844006  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:31.924398  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:32.034705  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:32.346065  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:32.423672  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:32.534377  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:32.842447  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:32.925931  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:33.034800  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:33.387960  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:33.429171  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:33.546901  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:33.852519  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:33.953288  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:34.035154  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:34.344025  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:34.431259  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:34.536600  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:34.843653  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:34.927609  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:35.036794  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:35.341408  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:35.425312  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:35.541227  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:35.847181  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:35.947699  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:36.035760  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:36.344915  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:36.424144  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:36.535593  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:36.841859  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:36.924975  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:37.037919  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:37.452583  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:37.459370  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:37.537236  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:37.841013  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:37.923280  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:38.036969  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:38.340515  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:38.425769  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:38.549235  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:38.842439  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:38.925062  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:39.035751  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:39.341398  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:39.422778  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:39.534951  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:39.841870  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:39.925988  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:40.034408  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:40.340654  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:40.424350  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:40.535075  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:40.843236  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:40.924921  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:41.034406  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:41.497913  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:41.499293  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:41.535243  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:41.844020  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:41.923065  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:42.045660  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:42.342026  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:42.426493  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:42.535570  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:42.841485  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:42.923010  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:43.039027  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:43.346733  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:43.432195  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:43.540145  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:43.885089  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:43.972714  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:44.068027  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:44.345507  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:44.427061  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:44.535862  595895 kapi.go:107] duration metric: took 1m14.00612311s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 11:32:44.842493  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:44.929592  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:45.347246  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:45.424028  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:45.841905  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:45.923701  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:46.347078  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:46.425229  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:46.845817  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:46.925006  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:47.341259  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:47.426132  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:47.845143  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:47.924205  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:48.349502  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:48.452604  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:48.846442  595895 kapi.go:107] duration metric: took 1m10.509578031s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 11:32:48.847867  595895 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-214441 cluster.
	I0929 11:32:48.849227  595895 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 11:32:48.850374  595895 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
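
The gcp-auth note above says a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. A minimal sketch of what that looks like, assuming a throwaway pod (the pod name, image, and the "true" value are placeholders; the key is what matters here):

# Hedged sketch: a pod that opts out of gcp-auth credential injection via
# the gcp-auth-skip-secret label mentioned in the output above.
kubectl --context addons-214441 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds-demo
  labels:
    gcp-auth-skip-secret: "true"
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
EOF
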
	I0929 11:32:48.946549  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:49.426824  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:49.927802  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:50.426120  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:50.925871  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:51.426655  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:51.927170  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:52.426213  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:52.923791  595895 kapi.go:107] duration metric: took 1m18.504852087s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0929 11:32:56.972597  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 11:32:57.723998  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:57.724041  595895 retry.go:31] will retry after 18.741816746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:33:16.468501  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 11:33:17.218683  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:33:17.218783  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:33:17.218797  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:33:17.219140  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:33:17.219161  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:33:17.219172  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:33:17.219180  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:33:17.219203  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:33:17.219480  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:33:17.219502  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:33:17.219534  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	W0929 11:33:17.219634  595895 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0929 11:33:17.221637  595895 out.go:179] * Enabled addons: ingress-dns, storage-provisioner-rancher, storage-provisioner, cloud-spanner, volcano, amd-gpu-device-plugin, metrics-server, registry-creds, nvidia-device-plugin, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0929 11:33:17.223007  595895 addons.go:514] duration metric: took 1m59.781528816s for enable addons: enabled=[ingress-dns storage-provisioner-rancher storage-provisioner cloud-spanner volcano amd-gpu-device-plugin metrics-server registry-creds nvidia-device-plugin yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
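
Only the inspektor-gadget addon failed here; every other addon in the list above came up, so the run continues. Once the manifest issue is fixed (or if one accepts the --validate=false workaround the error message itself suggests), the single addon can be retried by hand. A hedged sketch, reusing the binary and profile name from this run:

# Hedged sketch: re-enable just the failed addon; the addon name is taken
# from the error above, the profile name from this test run.
out/minikube-linux-amd64 -p addons-214441 addons enable inspektor-gadget
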
	I0929 11:33:17.223046  595895 start.go:246] waiting for cluster config update ...
	I0929 11:33:17.223066  595895 start.go:255] writing updated cluster config ...
	I0929 11:33:17.223379  595895 ssh_runner.go:195] Run: rm -f paused
	I0929 11:33:17.229885  595895 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:33:17.234611  595895 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fkh52" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.240669  595895 pod_ready.go:94] pod "coredns-66bc5c9577-fkh52" is "Ready"
	I0929 11:33:17.240694  595895 pod_ready.go:86] duration metric: took 6.057488ms for pod "coredns-66bc5c9577-fkh52" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.243134  595895 pod_ready.go:83] waiting for pod "etcd-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.248977  595895 pod_ready.go:94] pod "etcd-addons-214441" is "Ready"
	I0929 11:33:17.249003  595895 pod_ready.go:86] duration metric: took 5.848678ms for pod "etcd-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.251694  595895 pod_ready.go:83] waiting for pod "kube-apiserver-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.257270  595895 pod_ready.go:94] pod "kube-apiserver-addons-214441" is "Ready"
	I0929 11:33:17.257299  595895 pod_ready.go:86] duration metric: took 5.583626ms for pod "kube-apiserver-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.259585  595895 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.635253  595895 pod_ready.go:94] pod "kube-controller-manager-addons-214441" is "Ready"
	I0929 11:33:17.635287  595895 pod_ready.go:86] duration metric: took 375.675116ms for pod "kube-controller-manager-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.834921  595895 pod_ready.go:83] waiting for pod "kube-proxy-d9fnb" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:18.234706  595895 pod_ready.go:94] pod "kube-proxy-d9fnb" is "Ready"
	I0929 11:33:18.234735  595895 pod_ready.go:86] duration metric: took 399.786159ms for pod "kube-proxy-d9fnb" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:18.435590  595895 pod_ready.go:83] waiting for pod "kube-scheduler-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:18.834304  595895 pod_ready.go:94] pod "kube-scheduler-addons-214441" is "Ready"
	I0929 11:33:18.834340  595895 pod_ready.go:86] duration metric: took 398.719914ms for pod "kube-scheduler-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:18.834353  595895 pod_ready.go:40] duration metric: took 1.60442513s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:33:18.881427  595895 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 11:33:18.883901  595895 out.go:179] * Done! kubectl is now configured to use "addons-214441" cluster and "default" namespace by default
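
The readiness loop above polls kube-system pods by label selector until each reports Ready. The same check can be reproduced by hand; a hedged sketch using the selectors and context name taken from the log:

# Hedged sketch: spot-check the same control-plane pods the readiness loop
# above waited on, using the label selectors listed in the log.
for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
  kubectl --context addons-214441 get pods -n kube-system -l "$sel" -o wide
done
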
	
	
	==> Docker <==
	Sep 29 11:45:12 addons-214441 dockerd[1525]: time="2025-09-29T11:45:12.136853848Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:45:21 addons-214441 dockerd[1525]: time="2025-09-29T11:45:21.055513809Z" level=info msg="ignoring event" container=ec4ac1c4a59a99b911940e7471fd4d62bd648ddf20b864c871d76c778232c25f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:45:34 addons-214441 dockerd[1525]: time="2025-09-29T11:45:34.176156809Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:45:46 addons-214441 dockerd[1525]: time="2025-09-29T11:45:46.027312687Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=31302c4317135dfadbabf2bf5a114745aba05b03367b93c097aab3ee4dda5549
	Sep 29 11:45:46 addons-214441 dockerd[1525]: time="2025-09-29T11:45:46.072925083Z" level=info msg="ignoring event" container=31302c4317135dfadbabf2bf5a114745aba05b03367b93c097aab3ee4dda5549 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:45:46 addons-214441 dockerd[1525]: time="2025-09-29T11:45:46.221164820Z" level=info msg="ignoring event" container=621898582dfa1d0008fac20d7d4c0701ae058713638593c938b29f4e124362a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:45:48 addons-214441 dockerd[1525]: time="2025-09-29T11:45:48.168703145Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:46:11 addons-214441 dockerd[1525]: time="2025-09-29T11:46:11.367558686Z" level=info msg="ignoring event" container=868179ee6252a5dba8e6b99f42f6af823fe6a4b9c66fdc59b43f32e79d8b1e91 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:46:11 addons-214441 dockerd[1525]: time="2025-09-29T11:46:11.386900967Z" level=info msg="ignoring event" container=e805d753e363af635b55ad0eac5930dff6010fe21b404793ac2924ba2a55b33a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:46:11 addons-214441 cri-dockerd[1389]: time="2025-09-29T11:46:11Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"snapshot-controller-7d9fbc56b8-pw4g9_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 29 11:46:11 addons-214441 dockerd[1525]: time="2025-09-29T11:46:11.841970258Z" level=info msg="ignoring event" container=34844f808604d1e1bf660fc090e637c4fa94323f91e2b000f974ba40f5228be4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:46:11 addons-214441 dockerd[1525]: time="2025-09-29T11:46:11.861972984Z" level=info msg="ignoring event" container=5ef4f58a4b6da76f46acc76c8f2c67ff1d86cff0a8085f72ba15b2ae91ae7572 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:46:12 addons-214441 dockerd[1525]: time="2025-09-29T11:46:12.643852985Z" level=info msg="ignoring event" container=e02a58717cc7c34e665608df8831b5d19027ea03b6e5f40c472104755d710db3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:46:12 addons-214441 dockerd[1525]: time="2025-09-29T11:46:12.759919356Z" level=info msg="ignoring event" container=5810f70edf860e05f34fae1e1c6c9e50ba849dfc40ef3567e00f2cc4c8e5edc7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:46:12 addons-214441 dockerd[1525]: time="2025-09-29T11:46:12.802578785Z" level=info msg="ignoring event" container=a8b5f59d15a16bdd95e97d2c98dc5535e308fd10dbf069883c9eef232343c923 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:46:12 addons-214441 dockerd[1525]: time="2025-09-29T11:46:12.833206037Z" level=info msg="ignoring event" container=0ce41bd4faa5b9d420754a00ed70d95d44e2c248ccce3d05e441b826ad9ca5ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:46:12 addons-214441 dockerd[1525]: time="2025-09-29T11:46:12.843231026Z" level=info msg="ignoring event" container=2514173d96a26a64bc2f370ebdc424a80a127d3669f8e90de43c59274cd4c40b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:46:12 addons-214441 dockerd[1525]: time="2025-09-29T11:46:12.847896949Z" level=info msg="ignoring event" container=ef4f6e22ce31a3d210c64e6e2168878a0af838a728cd8568545cfb009e06de2a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:46:12 addons-214441 dockerd[1525]: time="2025-09-29T11:46:12.868039209Z" level=info msg="ignoring event" container=51f0c139f4f775faca9070a021ff165b7b259bd1a79892cd46fca26f0e1e38fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:46:12 addons-214441 dockerd[1525]: time="2025-09-29T11:46:12.886134959Z" level=info msg="ignoring event" container=af544573fc0a75eee29707d2daddbdd6c2826c3aafff9c5dcdbfd156947176b2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:46:13 addons-214441 cri-dockerd[1389]: time="2025-09-29T11:46:13Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"csi-hostpath-attacher-0_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 29 11:46:13 addons-214441 dockerd[1525]: time="2025-09-29T11:46:13.298370796Z" level=info msg="ignoring event" container=00ac4103d1658b5d0b777ca306286efb9718a2d802d693696849fb3f1a6d1a32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:46:13 addons-214441 cri-dockerd[1389]: time="2025-09-29T11:46:13Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"csi-hostpathplugin-8279f_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 29 11:46:13 addons-214441 dockerd[1525]: time="2025-09-29T11:46:13.677890534Z" level=info msg="ignoring event" container=9e3b6780764f86d4dfcbc34ecad2fcb1d0c6974584a83258976e2870b417ae91 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:46:13 addons-214441 dockerd[1525]: time="2025-09-29T11:46:13.709923919Z" level=info msg="ignoring event" container=02a7d350b835390c883f099b69a255e5e52b7740ce8bb3998aa66da8aaa53d17 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
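	
	Note: the repeated "toomanyrequests" pull failures above are Docker Hub's anonymous-pull rate limit, and they line up with the ImagePullBackOff states seen in the failing addon tests. A minimal way to confirm the remaining quota from the test host, assuming the documented Docker Hub token/ratelimit endpoints and the jq tool are available (a sketch, not part of the test run):
	
	  # request an anonymous pull token, then read the RateLimit-* headers on a manifest HEAD request
	  TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	  curl -sI -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit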
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8f0982c238973       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          8 minutes ago       Running             busybox                   0                   66bafac6b9afb       busybox
	9b5cb54a94a47       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             15 minutes ago      Running             controller                0                   8b83af6a32772       ingress-nginx-controller-9cc49f96f-h99dj
	30d73d85a386c       8c217da6734db                                                                                                                15 minutes ago      Exited              patch                     1                   63ec050554699       ingress-nginx-admission-patch-tp6tp
	4182ff3d1e473       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   15 minutes ago      Exited              create                    0                   f519da4bfec27       ingress-nginx-admission-create-s6nvq
	220ba84adaccb       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            15 minutes ago      Running             gadget                    0                   95e2903b29637       gadget-xvvvf
	48adb1b2452be       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                         15 minutes ago      Running             minikube-ingress-dns      0                   3ce8cc04a57f5       kube-ingress-dns-minikube
	388ea771a1c89       6e38f40d628db                                                                                                                16 minutes ago      Running             storage-provisioner       0                   a451536f2a3ae       storage-provisioner
	ef7f4d809a410       rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                               16 minutes ago      Running             amd-gpu-device-plugin     0                   efbec0257280a       amd-gpu-device-plugin-7jx7f
	5629c377b6053       52546a367cc9e                                                                                                                16 minutes ago      Running             coredns                   0                   b6c342cfbd0e9       coredns-66bc5c9577-fkh52
	cf32cea215063       df0860106674d                                                                                                                16 minutes ago      Running             kube-proxy                0                   164bb1f35fdbf       kube-proxy-d9fnb
	1b712309a5901       46169d968e920                                                                                                                16 minutes ago      Running             kube-scheduler            0                   16368e958b541       kube-scheduler-addons-214441
	5df8c088591fb       5f1f5298c888d                                                                                                                16 minutes ago      Running             etcd                      0                   0a4ad14786721       etcd-addons-214441
	b5368f01fa760       90550c43ad2bc                                                                                                                16 minutes ago      Running             kube-apiserver            0                   47b3b468b3308       kube-apiserver-addons-214441
	b7a56dc83eb1d       a0af72f2ec6d6                                                                                                                16 minutes ago      Running             kube-controller-manager   0                   8a7efdf44079d       kube-controller-manager-addons-214441
	
	
	==> controller_ingress [9b5cb54a94a4] <==
	I0929 11:32:45.037639       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-h99dj" node="addons-214441"
	W0929 11:39:51.373839       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0929 11:39:51.377315       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0929 11:39:51.383910       7 store.go:443] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	W0929 11:39:51.384731       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0929 11:39:51.386972       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0929 11:39:51.388223       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"6c60e7a0-fa15-408e-810a-a4af1c88fe08", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2366", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0929 11:39:51.444940       7 controller.go:228] "Backend successfully reloaded"
	I0929 11:39:51.450504       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-h99dj", UID:"2bde8bfa-47f0-48da-9e63-cd8e2a0a38c6", APIVersion:"v1", ResourceVersion:"784", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0929 11:39:54.719235       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I0929 11:39:54.719924       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0929 11:39:54.771503       7 controller.go:228] "Backend successfully reloaded"
	I0929 11:39:54.772049       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-h99dj", UID:"2bde8bfa-47f0-48da-9e63-cd8e2a0a38c6", APIVersion:"v1", ResourceVersion:"784", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0929 11:39:58.057011       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 11:40:01.385065       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 11:40:04.718802       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 11:40:08.052750       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 11:40:11.385651       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I0929 11:40:44.966647       7 status.go:309] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.39.76"}]
	I0929 11:40:44.973434       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"6c60e7a0-fa15-408e-810a-a4af1c88fe08", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2672", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0929 11:40:44.974230       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 11:42:12.884706       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 11:42:23.602348       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 11:46:12.415196       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 11:46:15.747727       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
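	
	Note: the persistent "Service \"default/nginx\" does not have any active Endpoint" warnings match the TestAddons/parallel/Ingress failure: the backing nginx pod never became Ready (see the kubelet ImagePullBackOff entries further below), so the controller had nothing to route to. A quick cross-check against the same cluster, assuming the resource names shown in these logs:
	
	  kubectl --context addons-214441 -n default get ingress nginx-ingress
	  kubectl --context addons-214441 -n default get pod,endpoints nginx
	  kubectl --context addons-214441 -n default describe pod nginx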
	
	
	==> coredns [5629c377b605] <==
	[INFO] 10.244.0.7:52212 - 14403 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.001145753s
	[INFO] 10.244.0.7:52212 - 34526 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.001027976s
	[INFO] 10.244.0.7:52212 - 40091 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.002958291s
	[INFO] 10.244.0.7:52212 - 8101 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000112715s
	[INFO] 10.244.0.7:52212 - 55833 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000201304s
	[INFO] 10.244.0.7:52212 - 46374 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000813986s
	[INFO] 10.244.0.7:52212 - 13461 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00014644s
	[INFO] 10.244.0.7:58134 - 57276 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000168682s
	[INFO] 10.244.0.7:58134 - 56902 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000087725s
	[INFO] 10.244.0.7:45806 - 23713 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000124662s
	[INFO] 10.244.0.7:45806 - 23950 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000142715s
	[INFO] 10.244.0.7:42777 - 55128 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080735s
	[INFO] 10.244.0.7:42777 - 54892 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000216294s
	[INFO] 10.244.0.7:36398 - 14124 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000321419s
	[INFO] 10.244.0.7:36398 - 13929 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000550817s
	[INFO] 10.244.0.26:41550 - 7840 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00065483s
	[INFO] 10.244.0.26:48585 - 52888 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000202217s
	[INFO] 10.244.0.26:53114 - 55168 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000190191s
	[INFO] 10.244.0.26:47096 - 26187 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000662248s
	[INFO] 10.244.0.26:48999 - 38178 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00015298s
	[INFO] 10.244.0.26:58286 - 39587 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000285241s
	[INFO] 10.244.0.26:45238 - 61249 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003642198s
	[INFO] 10.244.0.26:33573 - 52185 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.003922074s
	[INFO] 10.244.0.30:45249 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.002086838s
	[INFO] 10.244.0.30:35918 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000164605s
	
	
	==> describe nodes <==
	Name:               addons-214441
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-214441
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8c533eb3dce8338d3d4e7231ea97d1c44ed6f81
	                    minikube.k8s.io/name=addons-214441
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T11_31_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-214441
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 11:31:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-214441
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:47:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:46:11 +0000   Mon, 29 Sep 2025 11:31:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:46:11 +0000   Mon, 29 Sep 2025 11:31:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:46:11 +0000   Mon, 29 Sep 2025 11:31:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:46:11 +0000   Mon, 29 Sep 2025 11:31:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    addons-214441
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 44179717398847cdb8d861dffe58e059
	  System UUID:                44179717-3988-47cd-b8d8-61dffe58e059
	  Boot ID:                    f083535d-5807-413a-9a6b-1a0bbe2d4432
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m45s
	  gadget                      gadget-xvvvf                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-h99dj    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         16m
	  kube-system                 amd-gpu-device-plugin-7jx7f                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-66bc5c9577-fkh52                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     16m
	  kube-system                 etcd-addons-214441                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         16m
	  kube-system                 kube-apiserver-addons-214441                250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-214441       200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-d9fnb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-addons-214441                100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node addons-214441 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node addons-214441 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node addons-214441 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node addons-214441 event: Registered Node addons-214441 in Controller
	  Normal  NodeReady                16m   kubelet          Node addons-214441 status is now: NodeReady
	
	
	==> dmesg <==
	[  +5.142447] kauditd_printk_skb: 20 callbacks suppressed
	[Sep29 11:32] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.199632] kauditd_printk_skb: 38 callbacks suppressed
	[  +1.030429] kauditd_printk_skb: 78 callbacks suppressed
	[  +5.195773] kauditd_printk_skb: 75 callbacks suppressed
	[  +5.274224] kauditd_printk_skb: 150 callbacks suppressed
	[  +5.780886] kauditd_printk_skb: 68 callbacks suppressed
	[  +8.295767] kauditd_printk_skb: 56 callbacks suppressed
	[Sep29 11:39] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.045350] kauditd_printk_skb: 59 callbacks suppressed
	[ +11.893143] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.745446] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.704785] kauditd_printk_skb: 81 callbacks suppressed
	[Sep29 11:40] kauditd_printk_skb: 79 callbacks suppressed
	[  +2.308317] kauditd_printk_skb: 113 callbacks suppressed
	[  +1.203541] kauditd_printk_skb: 47 callbacks suppressed
	[Sep29 11:42] kauditd_printk_skb: 27 callbacks suppressed
	[  +9.517499] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.729582] kauditd_printk_skb: 22 callbacks suppressed
	[Sep29 11:44] kauditd_printk_skb: 26 callbacks suppressed
	[Sep29 11:45] kauditd_printk_skb: 9 callbacks suppressed
	[ +20.688994] kauditd_printk_skb: 26 callbacks suppressed
	[ +25.065246] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.842485] kauditd_printk_skb: 9 callbacks suppressed
	[Sep29 11:46] kauditd_printk_skb: 102 callbacks suppressed
	
	
	==> etcd [5df8c088591f] <==
	{"level":"warn","ts":"2025-09-29T11:32:00.549775Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.256178ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:00.549795Z","caller":"traceutil/trace.go:172","msg":"trace[872905781] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1062; }","duration":"133.278789ms","start":"2025-09-29T11:32:00.416510Z","end":"2025-09-29T11:32:00.549789Z","steps":["trace[872905781] 'agreement among raft nodes before linearized reading'  (duration: 133.240765ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:32:22.619881Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"283.951682ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:22.619953Z","caller":"traceutil/trace.go:172","msg":"trace[256565612] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1138; }","duration":"284.054314ms","start":"2025-09-29T11:32:22.335884Z","end":"2025-09-29T11:32:22.619939Z","steps":["trace[256565612] 'range keys from in-memory index tree'  (duration: 283.898213ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:32:22.620417Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"203.038923ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:22.620455Z","caller":"traceutil/trace.go:172","msg":"trace[2141218366] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1138; }","duration":"203.079865ms","start":"2025-09-29T11:32:22.417365Z","end":"2025-09-29T11:32:22.620444Z","steps":["trace[2141218366] 'range keys from in-memory index tree'  (duration: 202.851561ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:32:37.446139Z","caller":"traceutil/trace.go:172","msg":"trace[1518739598] linearizableReadLoop","detail":"{readStateIndex:1281; appliedIndex:1281; }","duration":"111.376689ms","start":"2025-09-29T11:32:37.334743Z","end":"2025-09-29T11:32:37.446120Z","steps":["trace[1518739598] 'read index received'  (duration: 111.370356ms)","trace[1518739598] 'applied index is now lower than readState.Index'  (duration: 5.449µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:32:37.446365Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.596508ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:37.446409Z","caller":"traceutil/trace.go:172","msg":"trace[333303529] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1250; }","duration":"111.664223ms","start":"2025-09-29T11:32:37.334737Z","end":"2025-09-29T11:32:37.446401Z","steps":["trace[333303529] 'agreement among raft nodes before linearized reading'  (duration: 111.566754ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:32:37.447956Z","caller":"traceutil/trace.go:172","msg":"trace[1818807407] transaction","detail":"{read_only:false; response_revision:1251; number_of_response:1; }","duration":"216.083326ms","start":"2025-09-29T11:32:37.231864Z","end":"2025-09-29T11:32:37.447947Z","steps":["trace[1818807407] 'process raft request'  (duration: 214.333833ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:32:41.490882Z","caller":"traceutil/trace.go:172","msg":"trace[1943079177] linearizableReadLoop","detail":"{readStateIndex:1295; appliedIndex:1295; }","duration":"156.252408ms","start":"2025-09-29T11:32:41.334599Z","end":"2025-09-29T11:32:41.490852Z","steps":["trace[1943079177] 'read index received'  (duration: 156.245254ms)","trace[1943079177] 'applied index is now lower than readState.Index'  (duration: 4.49µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:32:41.491088Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.469181ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:41.491110Z","caller":"traceutil/trace.go:172","msg":"trace[366978766] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1264; }","duration":"156.509563ms","start":"2025-09-29T11:32:41.334595Z","end":"2025-09-29T11:32:41.491105Z","steps":["trace[366978766] 'agreement among raft nodes before linearized reading'  (duration: 156.436502ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:32:41.491567Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-29T11:32:41.150207Z","time spent":"341.358415ms","remote":"127.0.0.1:41482","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2025-09-29T11:39:57.948345Z","caller":"traceutil/trace.go:172","msg":"trace[1591406496] linearizableReadLoop","detail":"{readStateIndex:2551; appliedIndex:2551; }","duration":"124.72426ms","start":"2025-09-29T11:39:57.823478Z","end":"2025-09-29T11:39:57.948202Z","steps":["trace[1591406496] 'read index received'  (duration: 124.71863ms)","trace[1591406496] 'applied index is now lower than readState.Index'  (duration: 4.802µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:39:57.948549Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.025613ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:39:57.948597Z","caller":"traceutil/trace.go:172","msg":"trace[612703964] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2421; }","duration":"125.116152ms","start":"2025-09-29T11:39:57.823474Z","end":"2025-09-29T11:39:57.948590Z","steps":["trace[612703964] 'agreement among raft nodes before linearized reading'  (duration: 124.997233ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:39:57.949437Z","caller":"traceutil/trace.go:172","msg":"trace[1306847484] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2422; }","duration":"296.693601ms","start":"2025-09-29T11:39:57.652733Z","end":"2025-09-29T11:39:57.949427Z","steps":["trace[1306847484] 'process raft request'  (duration: 296.121623ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:39:58.302377Z","caller":"traceutil/trace.go:172","msg":"trace[126438438] transaction","detail":"{read_only:false; response_revision:2433; number_of_response:1; }","duration":"116.690338ms","start":"2025-09-29T11:39:58.185669Z","end":"2025-09-29T11:39:58.302359Z","steps":["trace[126438438] 'process raft request'  (duration: 107.946386ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:41:07.514630Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1800}
	{"level":"info","ts":"2025-09-29T11:41:07.635361Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1800,"took":"119.419717ms","hash":3783191704,"current-db-size-bytes":8732672,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":5963776,"current-db-size-in-use":"6.0 MB"}
	{"level":"info","ts":"2025-09-29T11:41:07.635428Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3783191704,"revision":1800,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T11:46:07.523170Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2728}
	{"level":"info","ts":"2025-09-29T11:46:07.550978Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2728,"took":"26.544612ms","hash":3628222510,"current-db-size-bytes":8732672,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":4538368,"current-db-size-in-use":"4.5 MB"}
	{"level":"info","ts":"2025-09-29T11:46:07.551024Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3628222510,"revision":2728,"compact-revision":1800}
	
	
	==> kernel <==
	 11:47:53 up 17 min,  0 users,  load average: 0.66, 0.61, 0.61
	Linux addons-214441 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [b5368f01fa76] <==
	I0929 11:41:04.368312       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:41:09.156786       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 11:41:32.070520       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:42:20.474077       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:42:56.312150       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:43:33.051574       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:44:06.773562       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:44:43.393063       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:45:30.439510       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:45:44.970907       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:46:11.075945       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 11:46:11.076113       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 11:46:11.118376       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 11:46:11.118453       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 11:46:11.134732       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 11:46:11.134803       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 11:46:11.147993       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 11:46:11.148481       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 11:46:11.190942       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 11:46:11.190994       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0929 11:46:12.149129       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0929 11:46:12.190965       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0929 11:46:12.304066       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0929 11:46:57.458885       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:46:59.168722       1 stats.go:136] "Error getting keys" err="empty key: \"\""
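	
	Note: the "Terminating all watchers from cacher volumesnapshot*" lines at 11:46 show the snapshot.storage.k8s.io CRDs being deleted, i.e. the volumesnapshots/csi-hostpath-driver components being torn down after TestAddons/parallel/CSI gave up. To see what was left behind on the cluster (a sketch; the grep patterns are assumptions based on the pod names in the Docker section above):
	
	  kubectl --context addons-214441 get crd | grep snapshot.storage.k8s.io
	  kubectl --context addons-214441 -n kube-system get pods | grep -E 'csi-hostpath|snapshot-controller'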
	
	
	==> kube-controller-manager [b7a56dc83eb1] <==
	E0929 11:47:12.177248       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:47:12.178591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:47:15.157423       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:47:15.158962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:47:15.943857       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:47:15.945496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:47:16.203917       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0929 11:47:16.834143       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:47:16.835472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:47:17.367424       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:47:17.368881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:47:19.255339       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:47:19.256779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:47:31.204905       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0929 11:47:31.422851       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:47:31.424948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:47:42.816780       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:47:42.818363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:47:44.042149       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:47:44.043586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:47:46.205702       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0929 11:47:46.824817       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:47:46.827526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:47:50.147837       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:47:50.149218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
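	
	Note: the recurring "storageclass.storage.k8s.io \"local-path\" not found" errors for PVC "default/test-pvc" correspond to the TestAddons/parallel/LocalPath failure: the claim can never bind while the provisioner's StorageClass is missing (the provisioner itself was likely blocked by the same image-pull limit). A hedged way to confirm, assuming the local-path addon deploys into the conventional local-path-storage namespace:
	
	  kubectl --context addons-214441 get storageclass
	  kubectl --context addons-214441 -n local-path-storage get pods
	  kubectl --context addons-214441 -n default get pvc test-pvc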
	
	
	==> kube-proxy [cf32cea21506] <==
	I0929 11:31:18.966107       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:31:19.067553       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:31:19.067585       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.76"]
	E0929 11:31:19.067663       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:31:19.367843       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 11:31:19.367925       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 11:31:19.367957       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:31:19.410838       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:31:19.411105       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:31:19.411117       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:31:19.438109       1 config.go:200] "Starting service config controller"
	I0929 11:31:19.438145       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:31:19.438165       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:31:19.438169       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:31:19.438197       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:31:19.438201       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:31:19.443612       1 config.go:309] "Starting node config controller"
	I0929 11:31:19.443644       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:31:19.443650       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 11:31:19.552512       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:31:19.552650       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 11:31:19.639397       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [1b712309a590] <==
	E0929 11:31:09.221196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 11:31:09.221236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 11:31:09.222033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 11:31:09.225006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 11:31:09.225514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 11:31:09.225802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 11:31:09.225865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 11:31:09.225922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 11:31:09.226012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 11:31:09.226045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 11:31:10.048406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 11:31:10.133629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 11:31:10.190360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 11:31:10.277104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 11:31:10.293798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 11:31:10.302970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 11:31:10.326331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 11:31:10.346485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 11:31:10.373940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 11:31:10.450205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 11:31:10.476705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 11:31:10.548049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 11:31:10.584420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 11:31:10.696768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 11:31:12.791660       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 11:46:14 addons-214441 kubelet[2504]: I0929 11:46:14.990358    2504 scope.go:117] "RemoveContainer" containerID="e02a58717cc7c34e665608df8831b5d19027ea03b6e5f40c472104755d710db3"
	Sep 29 11:46:15 addons-214441 kubelet[2504]: I0929 11:46:15.046121    2504 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-7jx7f" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 11:46:16 addons-214441 kubelet[2504]: E0929 11:46:16.046181    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="aff7bf59-352b-45d6-9449-f442a6b25e27"
	Sep 29 11:46:16 addons-214441 kubelet[2504]: I0929 11:46:16.058431    2504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27d9af25-8d41-4c37-9359-c7bd4f88f09f" path="/var/lib/kubelet/pods/27d9af25-8d41-4c37-9359-c7bd4f88f09f/volumes"
	Sep 29 11:46:16 addons-214441 kubelet[2504]: I0929 11:46:16.059190    2504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cc6db25-521c-4711-9d36-e22ab6d16249" path="/var/lib/kubelet/pods/3cc6db25-521c-4711-9d36-e22ab6d16249/volumes"
	Sep 29 11:46:16 addons-214441 kubelet[2504]: I0929 11:46:16.060040    2504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f24e046-d5b2-41ff-a051-b0a572bf9348" path="/var/lib/kubelet/pods/4f24e046-d5b2-41ff-a051-b0a572bf9348/volumes"
	Sep 29 11:46:17 addons-214441 kubelet[2504]: E0929 11:46:17.049441    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="182f1b86-e027-4d79-a5a9-272a05688c3b"
	Sep 29 11:46:27 addons-214441 kubelet[2504]: E0929 11:46:27.046352    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="aff7bf59-352b-45d6-9449-f442a6b25e27"
	Sep 29 11:46:30 addons-214441 kubelet[2504]: I0929 11:46:30.046106    2504 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 11:46:32 addons-214441 kubelet[2504]: E0929 11:46:32.049830    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="182f1b86-e027-4d79-a5a9-272a05688c3b"
	Sep 29 11:46:38 addons-214441 kubelet[2504]: E0929 11:46:38.046134    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="aff7bf59-352b-45d6-9449-f442a6b25e27"
	Sep 29 11:46:44 addons-214441 kubelet[2504]: E0929 11:46:44.050395    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="182f1b86-e027-4d79-a5a9-272a05688c3b"
	Sep 29 11:46:51 addons-214441 kubelet[2504]: E0929 11:46:51.046040    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="aff7bf59-352b-45d6-9449-f442a6b25e27"
	Sep 29 11:46:58 addons-214441 kubelet[2504]: E0929 11:46:58.050593    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="182f1b86-e027-4d79-a5a9-272a05688c3b"
	Sep 29 11:47:03 addons-214441 kubelet[2504]: W0929 11:47:03.411336    2504 logging.go:55] [core] [Channel #68 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Sep 29 11:47:06 addons-214441 kubelet[2504]: E0929 11:47:06.046466    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="aff7bf59-352b-45d6-9449-f442a6b25e27"
	Sep 29 11:47:10 addons-214441 kubelet[2504]: E0929 11:47:10.051161    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="182f1b86-e027-4d79-a5a9-272a05688c3b"
	Sep 29 11:47:17 addons-214441 kubelet[2504]: E0929 11:47:17.045828    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="aff7bf59-352b-45d6-9449-f442a6b25e27"
	Sep 29 11:47:23 addons-214441 kubelet[2504]: I0929 11:47:23.045874    2504 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-7jx7f" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 11:47:25 addons-214441 kubelet[2504]: E0929 11:47:25.049803    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="182f1b86-e027-4d79-a5a9-272a05688c3b"
	Sep 29 11:47:30 addons-214441 kubelet[2504]: E0929 11:47:30.046442    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="aff7bf59-352b-45d6-9449-f442a6b25e27"
	Sep 29 11:47:38 addons-214441 kubelet[2504]: E0929 11:47:38.052118    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="182f1b86-e027-4d79-a5a9-272a05688c3b"
	Sep 29 11:47:43 addons-214441 kubelet[2504]: E0929 11:47:43.046572    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="aff7bf59-352b-45d6-9449-f442a6b25e27"
	Sep 29 11:47:49 addons-214441 kubelet[2504]: I0929 11:47:49.045156    2504 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 11:47:49 addons-214441 kubelet[2504]: E0929 11:47:49.049140    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="182f1b86-e027-4d79-a5a9-272a05688c3b"
	
	
	==> storage-provisioner [388ea771a1c8] <==
	W0929 11:47:27.709612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:47:29.712935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:47:29.719108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:47:31.723998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:47:31.730331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:47:33.734161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:47:33.742057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:47:35.747798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:47:35.756556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:47:37.760751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:47:37.767690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:47:39.771905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:47:39.778696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:47:41.782609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:47:41.791022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:47:43.795023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:47:43.801401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:47:45.807038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:47:45.816095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:47:47.822479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:47:47.839867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:47:49.844001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:47:49.853071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:47:51.859359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:47:51.866408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
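Note: every kubelet error in the log dump above traces back to Docker Hub's unauthenticated pull rate limit (toomanyrequests) while pulling docker.io/nginx and docker.io/nginx:alpine. One common workaround, sketched here with placeholder credentials rather than values from this run, is to authenticate image pulls through a docker-registry secret attached to the default service account:

# <hub-user> and <hub-token> are placeholders, not values from this test run
kubectl --context addons-214441 create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<hub-user> --docker-password=<hub-token>
# attach the secret to the default service account so pods in "default" pull with it
kubectl --context addons-214441 patch serviceaccount default \
  -p '{"imagePullSecrets":[{"name":"regcred"}]}'

Authenticated pulls get a substantially higher rate limit, so the nginx and task-pv-pod pods shown above would no longer sit in ImagePullBackOff.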
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-214441 -n addons-214441
helpers_test.go:269: (dbg) Run:  kubectl --context addons-214441 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-s6nvq ingress-nginx-admission-patch-tp6tp
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-214441 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-s6nvq ingress-nginx-admission-patch-tp6tp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-214441 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-s6nvq ingress-nginx-admission-patch-tp6tp: exit status 1 (96.088749ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-214441/192.168.39.76
	Start Time:       Mon, 29 Sep 2025 11:39:51 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rdmgz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rdmgz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  8m3s                  default-scheduler  Successfully assigned default/nginx to addons-214441
	  Warning  Failed     8m2s                  kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    5m9s (x5 over 8m2s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     5m9s (x5 over 8m2s)   kubelet            Error: ErrImagePull
	  Warning  Failed     5m9s (x4 over 7m48s)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    3m1s (x21 over 8m1s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m1s (x21 over 8m1s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-214441/192.168.39.76
	Start Time:       Mon, 29 Sep 2025 11:40:08 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:  10.244.0.31
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kt6ld (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-kt6ld:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  7m46s                   default-scheduler  Successfully assigned default/task-pv-pod to addons-214441
	  Warning  Failed     7m2s                    kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    4m51s (x5 over 7m46s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     4m51s (x4 over 7m46s)   kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m51s (x5 over 7m46s)   kubelet            Error: ErrImagePull
	  Warning  Failed     2m42s (x20 over 7m45s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2m29s (x21 over 7m45s)  kubelet            Back-off pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tffd7 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-tffd7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-s6nvq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tp6tp" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-214441 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-s6nvq ingress-nginx-admission-patch-tp6tp: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214441 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-214441 addons disable ingress-dns --alsologtostderr -v=1: (1.127708555s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214441 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-214441 addons disable ingress --alsologtostderr -v=1: (7.815827006s)
--- FAIL: TestAddons/parallel/Ingress (492.04s)
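The Ingress failure above is not an ingress-controller defect; the nginx test pod never started because its image pull hit the Docker Hub rate limit. One way to sidestep the flake on this host is to preload the image into the cluster's container runtime before the test touches it (illustrative command, not part of the test itself):

# load the image into the addons-214441 profile's runtime ahead of time
minikube -p addons-214441 image load docker.io/nginx:alpine

With the image already present, the kubelet can use the cached copy (the default IfNotPresent pull policy applies to a pinned tag) instead of pulling from docker.io.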

                                                
                                    
x
+
TestAddons/parallel/CSI (373.82s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0929 11:40:04.841095  595293 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0929 11:40:04.850229  595293 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0929 11:40:04.850262  595293 kapi.go:107] duration metric: took 9.192376ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 9.205079ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-214441 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc hpvc -o jsonpath={.status.phase} -n default
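The step above applies testdata/csi-hostpath-driver/pvc.yaml and then polls the claim's phase. That testdata file is not reproduced in this report; a minimal claim of the kind the CSI addon is expected to bind would look roughly like the following assumed sketch, including the csi-hostpath-sc storage class name the addon normally installs:

kubectl --context addons-214441 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc  # storage class normally created by the csi-hostpath-driver addon (assumed)
EOF

The helpers_test.go:402 runs above then poll jsonpath={.status.phase} until the claim reports Bound.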
addons_test.go:562: (dbg) Run:  kubectl --context addons-214441 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [aff7bf59-352b-45d6-9449-f442a6b25e27] Pending
helpers_test.go:352: "task-pv-pod" [aff7bf59-352b-45d6-9449-f442a6b25e27] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:337: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-214441 -n addons-214441
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-09-29 11:46:08.475729457 +0000 UTC m=+954.630463131
addons_test.go:567: (dbg) Run:  kubectl --context addons-214441 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-214441 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-214441/192.168.39.76
Start Time:       Mon, 29 Sep 2025 11:40:08 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.31
IPs:
IP:  10.244.0.31
Containers:
task-pv-container:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           80/TCP (http-server)
Host Port:      0/TCP (http-server)
State:          Waiting
Reason:       ErrImagePull
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kt6ld (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc
ReadOnly:   false
kube-api-access-kt6ld:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  6m                    default-scheduler  Successfully assigned default/task-pv-pod to addons-214441
Warning  Failed     5m16s                 kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    3m5s (x5 over 6m)     kubelet            Pulling image "docker.io/nginx"
Warning  Failed     3m5s (x4 over 6m)     kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     3m5s (x5 over 6m)     kubelet            Error: ErrImagePull
Warning  Failed     56s (x20 over 5m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    43s (x21 over 5m59s)  kubelet            Back-off pulling image "docker.io/nginx"
addons_test.go:567: (dbg) Run:  kubectl --context addons-214441 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-214441 logs task-pv-pod -n default: exit status 1 (82.78454ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
addons_test.go:567: kubectl --context addons-214441 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
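For reference, the 6m0s wait that failed here corresponds roughly to the following manual check against the same profile (illustrative kubectl equivalent, not the helper's actual implementation):

# illustrative: wait for the labeled pod to become Ready, as the test helper does
kubectl --context addons-214441 -n default wait pod -l app=task-pv-pod \
  --for=condition=Ready --timeout=6m0s

Because the container image never became pullable, the Ready condition stays False and the wait times out just as the test helper did.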
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-214441 -n addons-214441
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-214441 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-214441 logs -n 25: (1.076573771s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-383930                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ download-only-383930 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ start   │ -o=json --download-only -p download-only-221115 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=kvm2  --auto-update-drivers=false                                                                                                                                                                                                                                                                                              │ download-only-221115 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ delete  │ -p download-only-221115                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ download-only-221115 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ delete  │ -p download-only-383930                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ download-only-383930 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ delete  │ -p download-only-221115                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ download-only-221115 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ start   │ --download-only -p binary-mirror-005122 --alsologtostderr --binary-mirror http://127.0.0.1:35607 --driver=kvm2  --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-005122 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ delete  │ -p binary-mirror-005122                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ binary-mirror-005122 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ addons  │ disable dashboard -p addons-214441                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ addons  │ enable dashboard -p addons-214441                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ start   │ -p addons-214441 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:33 UTC │
	│ addons  │ addons-214441 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:39 UTC │ 29 Sep 25 11:39 UTC │
	│ addons  │ addons-214441 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:39 UTC │ 29 Sep 25 11:39 UTC │
	│ addons  │ enable headlamp -p addons-214441 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:39 UTC │ 29 Sep 25 11:39 UTC │
	│ addons  │ addons-214441 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:39 UTC │ 29 Sep 25 11:39 UTC │
	│ addons  │ addons-214441 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ addons  │ addons-214441 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ ip      │ addons-214441 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ addons  │ addons-214441 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ addons  │ addons-214441 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-214441                                                                                                                                                                                                                                                                                                                                                                                            │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ addons  │ addons-214441 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ addons  │ addons-214441 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:42 UTC │ 29 Sep 25 11:42 UTC │
	│ addons  │ addons-214441 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:42 UTC │ 29 Sep 25 11:42 UTC │
	│ addons  │ addons-214441 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                           │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:45 UTC │ 29 Sep 25 11:45 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:30:26
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:30:26.464374  595895 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:30:26.464481  595895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:30:26.464487  595895 out.go:374] Setting ErrFile to fd 2...
	I0929 11:30:26.464493  595895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:30:26.464787  595895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21654-591397/.minikube/bin
	I0929 11:30:26.465454  595895 out.go:368] Setting JSON to false
	I0929 11:30:26.466447  595895 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":4374,"bootTime":1759141052,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:30:26.466553  595895 start.go:140] virtualization: kvm guest
	I0929 11:30:26.468688  595895 out.go:179] * [addons-214441] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:30:26.470181  595895 out.go:179]   - MINIKUBE_LOCATION=21654
	I0929 11:30:26.470220  595895 notify.go:220] Checking for updates...
	I0929 11:30:26.473145  595895 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:30:26.474634  595895 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21654-591397/kubeconfig
	I0929 11:30:26.475793  595895 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21654-591397/.minikube
	I0929 11:30:26.477353  595895 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:30:26.478534  595895 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:30:26.479959  595895 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:30:26.513451  595895 out.go:179] * Using the kvm2 driver based on user configuration
	I0929 11:30:26.514622  595895 start.go:304] selected driver: kvm2
	I0929 11:30:26.514644  595895 start.go:924] validating driver "kvm2" against <nil>
	I0929 11:30:26.514659  595895 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:30:26.515675  595895 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:30:26.515785  595895 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21654-591397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:30:26.530531  595895 install.go:163] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:30:26.530568  595895 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21654-591397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:30:26.545187  595895 install.go:163] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:30:26.545244  595895 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 11:30:26.545491  595895 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:30:26.545527  595895 cni.go:84] Creating CNI manager for ""
	I0929 11:30:26.545570  595895 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 11:30:26.545579  595895 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 11:30:26.545628  595895 start.go:348] cluster config:
	{Name:addons-214441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Network
Plugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0
s}
	I0929 11:30:26.545714  595895 iso.go:125] acquiring lock: {Name:mk3bf2644aacab696b9f4d566a6d81a30d75b71a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:30:26.547400  595895 out.go:179] * Starting "addons-214441" primary control-plane node in "addons-214441" cluster
	I0929 11:30:26.548855  595895 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 11:30:26.548909  595895 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21654-591397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0929 11:30:26.548918  595895 cache.go:58] Caching tarball of preloaded images
	I0929 11:30:26.549035  595895 preload.go:172] Found /home/jenkins/minikube-integration/21654-591397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0929 11:30:26.549046  595895 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 11:30:26.549389  595895 profile.go:143] Saving config to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/config.json ...
	I0929 11:30:26.549415  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/config.json: {Name:mka28e9e486990f30eb3eb321797c26d13a435f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:26.549559  595895 start.go:360] acquireMachinesLock for addons-214441: {Name:mka3370f06ebed6e47b43729e748683065f344f5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0929 11:30:26.549614  595895 start.go:364] duration metric: took 40.43µs to acquireMachinesLock for "addons-214441"
	I0929 11:30:26.549633  595895 start.go:93] Provisioning new machine with config: &{Name:addons-214441 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 Clu
sterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 11:30:26.549681  595895 start.go:125] createHost starting for "" (driver="kvm2")
	I0929 11:30:26.551210  595895 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0929 11:30:26.551360  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:30:26.551417  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:30:26.564991  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37291
	I0929 11:30:26.565640  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:30:26.566242  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:30:26.566262  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:30:26.566742  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:30:26.566933  595895 main.go:141] libmachine: (addons-214441) Calling .GetMachineName
	I0929 11:30:26.567150  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:26.567316  595895 start.go:159] libmachine.API.Create for "addons-214441" (driver="kvm2")
	I0929 11:30:26.567351  595895 client.go:168] LocalClient.Create starting
	I0929 11:30:26.567402  595895 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem
	I0929 11:30:26.955780  595895 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/cert.pem
	I0929 11:30:27.214636  595895 main.go:141] libmachine: Running pre-create checks...
	I0929 11:30:27.214665  595895 main.go:141] libmachine: (addons-214441) Calling .PreCreateCheck
	I0929 11:30:27.215304  595895 main.go:141] libmachine: (addons-214441) Calling .GetConfigRaw
	I0929 11:30:27.215869  595895 main.go:141] libmachine: Creating machine...
	I0929 11:30:27.215887  595895 main.go:141] libmachine: (addons-214441) Calling .Create
	I0929 11:30:27.216119  595895 main.go:141] libmachine: (addons-214441) creating domain...
	I0929 11:30:27.216141  595895 main.go:141] libmachine: (addons-214441) creating network...
	I0929 11:30:27.217698  595895 main.go:141] libmachine: (addons-214441) DBG | found existing default network
	I0929 11:30:27.217987  595895 main.go:141] libmachine: (addons-214441) DBG | <network>
	I0929 11:30:27.218041  595895 main.go:141] libmachine: (addons-214441) DBG |   <name>default</name>
	I0929 11:30:27.218077  595895 main.go:141] libmachine: (addons-214441) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I0929 11:30:27.218099  595895 main.go:141] libmachine: (addons-214441) DBG |   <forward mode='nat'>
	I0929 11:30:27.218124  595895 main.go:141] libmachine: (addons-214441) DBG |     <nat>
	I0929 11:30:27.218134  595895 main.go:141] libmachine: (addons-214441) DBG |       <port start='1024' end='65535'/>
	I0929 11:30:27.218144  595895 main.go:141] libmachine: (addons-214441) DBG |     </nat>
	I0929 11:30:27.218151  595895 main.go:141] libmachine: (addons-214441) DBG |   </forward>
	I0929 11:30:27.218161  595895 main.go:141] libmachine: (addons-214441) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I0929 11:30:27.218190  595895 main.go:141] libmachine: (addons-214441) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I0929 11:30:27.218203  595895 main.go:141] libmachine: (addons-214441) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I0929 11:30:27.218212  595895 main.go:141] libmachine: (addons-214441) DBG |     <dhcp>
	I0929 11:30:27.218222  595895 main.go:141] libmachine: (addons-214441) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I0929 11:30:27.218232  595895 main.go:141] libmachine: (addons-214441) DBG |     </dhcp>
	I0929 11:30:27.218245  595895 main.go:141] libmachine: (addons-214441) DBG |   </ip>
	I0929 11:30:27.218256  595895 main.go:141] libmachine: (addons-214441) DBG | </network>
	I0929 11:30:27.218263  595895 main.go:141] libmachine: (addons-214441) DBG | 
	I0929 11:30:27.219018  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.218796  595923 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000200f10}
	I0929 11:30:27.219127  595895 main.go:141] libmachine: (addons-214441) DBG | defining private network:
	I0929 11:30:27.219156  595895 main.go:141] libmachine: (addons-214441) DBG | 
	I0929 11:30:27.219168  595895 main.go:141] libmachine: (addons-214441) DBG | <network>
	I0929 11:30:27.219179  595895 main.go:141] libmachine: (addons-214441) DBG |   <name>mk-addons-214441</name>
	I0929 11:30:27.219187  595895 main.go:141] libmachine: (addons-214441) DBG |   <dns enable='no'/>
	I0929 11:30:27.219194  595895 main.go:141] libmachine: (addons-214441) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0929 11:30:27.219200  595895 main.go:141] libmachine: (addons-214441) DBG |     <dhcp>
	I0929 11:30:27.219208  595895 main.go:141] libmachine: (addons-214441) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0929 11:30:27.219214  595895 main.go:141] libmachine: (addons-214441) DBG |     </dhcp>
	I0929 11:30:27.219218  595895 main.go:141] libmachine: (addons-214441) DBG |   </ip>
	I0929 11:30:27.219224  595895 main.go:141] libmachine: (addons-214441) DBG | </network>
	I0929 11:30:27.219227  595895 main.go:141] libmachine: (addons-214441) DBG | 
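The block above is the libvirt <network> definition the kvm2 driver submits for the isolated minikube network. As a rough illustration only (the struct names below are hypothetical, not minikube's actual types), an equivalent document can be produced with Go's encoding/xml:

// Illustrative sketch: build a libvirt <network> document like the one logged above.
package main

import (
	"encoding/xml"
	"fmt"
)

type dhcpRange struct {
	Start string `xml:"start,attr"`
	End   string `xml:"end,attr"`
}

type dhcpXML struct {
	Range dhcpRange `xml:"range"`
}

type ipXML struct {
	Address string  `xml:"address,attr"`
	Netmask string  `xml:"netmask,attr"`
	DHCP    dhcpXML `xml:"dhcp"`
}

type dnsXML struct {
	Enable string `xml:"enable,attr"`
}

type netXML struct {
	XMLName xml.Name `xml:"network"`
	Name    string   `xml:"name"`
	DNS     dnsXML   `xml:"dns"`
	IP      ipXML    `xml:"ip"`
}

func main() {
	n := netXML{
		Name: "mk-addons-214441",
		DNS:  dnsXML{Enable: "no"},
		IP: ipXML{
			Address: "192.168.39.1",
			Netmask: "255.255.255.0",
			DHCP:    dhcpXML{Range: dhcpRange{Start: "192.168.39.2", End: "192.168.39.253"}},
		},
	}
	out, _ := xml.MarshalIndent(n, "", "  ")
	fmt.Println(string(out)) // this XML would then be handed to libvirt's network-define call
}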
	I0929 11:30:27.225021  595895 main.go:141] libmachine: (addons-214441) DBG | creating private network mk-addons-214441 192.168.39.0/24...
	I0929 11:30:27.300287  595895 main.go:141] libmachine: (addons-214441) DBG | private network mk-addons-214441 192.168.39.0/24 created
	I0929 11:30:27.300635  595895 main.go:141] libmachine: (addons-214441) DBG | <network>
	I0929 11:30:27.300651  595895 main.go:141] libmachine: (addons-214441) DBG |   <name>mk-addons-214441</name>
	I0929 11:30:27.300675  595895 main.go:141] libmachine: (addons-214441) setting up store path in /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441 ...
	I0929 11:30:27.300695  595895 main.go:141] libmachine: (addons-214441) building disk image from file:///home/jenkins/minikube-integration/21654-591397/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I0929 11:30:27.300713  595895 main.go:141] libmachine: (addons-214441) DBG |   <uuid>9d6191f7-7df6-4691-bff3-3dbacc8ac925</uuid>
	I0929 11:30:27.300719  595895 main.go:141] libmachine: (addons-214441) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I0929 11:30:27.300726  595895 main.go:141] libmachine: (addons-214441) DBG |   <mac address='52:54:00:ff:bc:22'/>
	I0929 11:30:27.300730  595895 main.go:141] libmachine: (addons-214441) DBG |   <dns enable='no'/>
	I0929 11:30:27.300736  595895 main.go:141] libmachine: (addons-214441) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0929 11:30:27.300741  595895 main.go:141] libmachine: (addons-214441) DBG |     <dhcp>
	I0929 11:30:27.300747  595895 main.go:141] libmachine: (addons-214441) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0929 11:30:27.300754  595895 main.go:141] libmachine: (addons-214441) DBG |     </dhcp>
	I0929 11:30:27.300758  595895 main.go:141] libmachine: (addons-214441) DBG |   </ip>
	I0929 11:30:27.300763  595895 main.go:141] libmachine: (addons-214441) DBG | </network>
	I0929 11:30:27.300770  595895 main.go:141] libmachine: (addons-214441) DBG | 
	I0929 11:30:27.300780  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.300615  595923 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21654-591397/.minikube
	I0929 11:30:27.300970  595895 main.go:141] libmachine: (addons-214441) Downloading /home/jenkins/minikube-integration/21654-591397/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21654-591397/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I0929 11:30:27.567829  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.567633  595923 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa...
	I0929 11:30:27.812384  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.812174  595923 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/addons-214441.rawdisk...
	I0929 11:30:27.812428  595895 main.go:141] libmachine: (addons-214441) DBG | Writing magic tar header
	I0929 11:30:27.812454  595895 main.go:141] libmachine: (addons-214441) DBG | Writing SSH key tar header
	I0929 11:30:27.812465  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.812330  595923 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441 ...
	I0929 11:30:27.812480  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441
	I0929 11:30:27.812548  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21654-591397/.minikube/machines
	I0929 11:30:27.812584  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441 (perms=drwx------)
	I0929 11:30:27.812594  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21654-591397/.minikube
	I0929 11:30:27.812609  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21654-591397
	I0929 11:30:27.812617  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0929 11:30:27.812625  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins
	I0929 11:30:27.812632  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home
	I0929 11:30:27.812642  595895 main.go:141] libmachine: (addons-214441) DBG | skipping /home - not owner
	I0929 11:30:27.812734  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration/21654-591397/.minikube/machines (perms=drwxr-xr-x)
	I0929 11:30:27.812784  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration/21654-591397/.minikube (perms=drwxr-xr-x)
	I0929 11:30:27.812829  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration/21654-591397 (perms=drwxrwxr-x)
	I0929 11:30:27.812851  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0929 11:30:27.812866  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0929 11:30:27.812895  595895 main.go:141] libmachine: (addons-214441) defining domain...
	I0929 11:30:27.814169  595895 main.go:141] libmachine: (addons-214441) defining domain using XML: 
	I0929 11:30:27.814189  595895 main.go:141] libmachine: (addons-214441) <domain type='kvm'>
	I0929 11:30:27.814197  595895 main.go:141] libmachine: (addons-214441)   <name>addons-214441</name>
	I0929 11:30:27.814204  595895 main.go:141] libmachine: (addons-214441)   <memory unit='MiB'>4096</memory>
	I0929 11:30:27.814211  595895 main.go:141] libmachine: (addons-214441)   <vcpu>2</vcpu>
	I0929 11:30:27.814217  595895 main.go:141] libmachine: (addons-214441)   <features>
	I0929 11:30:27.814224  595895 main.go:141] libmachine: (addons-214441)     <acpi/>
	I0929 11:30:27.814236  595895 main.go:141] libmachine: (addons-214441)     <apic/>
	I0929 11:30:27.814260  595895 main.go:141] libmachine: (addons-214441)     <pae/>
	I0929 11:30:27.814274  595895 main.go:141] libmachine: (addons-214441)   </features>
	I0929 11:30:27.814283  595895 main.go:141] libmachine: (addons-214441)   <cpu mode='host-passthrough'>
	I0929 11:30:27.814290  595895 main.go:141] libmachine: (addons-214441)   </cpu>
	I0929 11:30:27.814300  595895 main.go:141] libmachine: (addons-214441)   <os>
	I0929 11:30:27.814310  595895 main.go:141] libmachine: (addons-214441)     <type>hvm</type>
	I0929 11:30:27.814319  595895 main.go:141] libmachine: (addons-214441)     <boot dev='cdrom'/>
	I0929 11:30:27.814323  595895 main.go:141] libmachine: (addons-214441)     <boot dev='hd'/>
	I0929 11:30:27.814331  595895 main.go:141] libmachine: (addons-214441)     <bootmenu enable='no'/>
	I0929 11:30:27.814337  595895 main.go:141] libmachine: (addons-214441)   </os>
	I0929 11:30:27.814342  595895 main.go:141] libmachine: (addons-214441)   <devices>
	I0929 11:30:27.814352  595895 main.go:141] libmachine: (addons-214441)     <disk type='file' device='cdrom'>
	I0929 11:30:27.814381  595895 main.go:141] libmachine: (addons-214441)       <source file='/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/boot2docker.iso'/>
	I0929 11:30:27.814393  595895 main.go:141] libmachine: (addons-214441)       <target dev='hdc' bus='scsi'/>
	I0929 11:30:27.814438  595895 main.go:141] libmachine: (addons-214441)       <readonly/>
	I0929 11:30:27.814469  595895 main.go:141] libmachine: (addons-214441)     </disk>
	I0929 11:30:27.814485  595895 main.go:141] libmachine: (addons-214441)     <disk type='file' device='disk'>
	I0929 11:30:27.814501  595895 main.go:141] libmachine: (addons-214441)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0929 11:30:27.814519  595895 main.go:141] libmachine: (addons-214441)       <source file='/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/addons-214441.rawdisk'/>
	I0929 11:30:27.814537  595895 main.go:141] libmachine: (addons-214441)       <target dev='hda' bus='virtio'/>
	I0929 11:30:27.814551  595895 main.go:141] libmachine: (addons-214441)     </disk>
	I0929 11:30:27.814564  595895 main.go:141] libmachine: (addons-214441)     <interface type='network'>
	I0929 11:30:27.814577  595895 main.go:141] libmachine: (addons-214441)       <source network='mk-addons-214441'/>
	I0929 11:30:27.814587  595895 main.go:141] libmachine: (addons-214441)       <model type='virtio'/>
	I0929 11:30:27.814598  595895 main.go:141] libmachine: (addons-214441)     </interface>
	I0929 11:30:27.814608  595895 main.go:141] libmachine: (addons-214441)     <interface type='network'>
	I0929 11:30:27.814616  595895 main.go:141] libmachine: (addons-214441)       <source network='default'/>
	I0929 11:30:27.814644  595895 main.go:141] libmachine: (addons-214441)       <model type='virtio'/>
	I0929 11:30:27.814658  595895 main.go:141] libmachine: (addons-214441)     </interface>
	I0929 11:30:27.814670  595895 main.go:141] libmachine: (addons-214441)     <serial type='pty'>
	I0929 11:30:27.814681  595895 main.go:141] libmachine: (addons-214441)       <target port='0'/>
	I0929 11:30:27.814692  595895 main.go:141] libmachine: (addons-214441)     </serial>
	I0929 11:30:27.814707  595895 main.go:141] libmachine: (addons-214441)     <console type='pty'>
	I0929 11:30:27.814717  595895 main.go:141] libmachine: (addons-214441)       <target type='serial' port='0'/>
	I0929 11:30:27.814725  595895 main.go:141] libmachine: (addons-214441)     </console>
	I0929 11:30:27.814732  595895 main.go:141] libmachine: (addons-214441)     <rng model='virtio'>
	I0929 11:30:27.814741  595895 main.go:141] libmachine: (addons-214441)       <backend model='random'>/dev/random</backend>
	I0929 11:30:27.814750  595895 main.go:141] libmachine: (addons-214441)     </rng>
	I0929 11:30:27.814759  595895 main.go:141] libmachine: (addons-214441)   </devices>
	I0929 11:30:27.814768  595895 main.go:141] libmachine: (addons-214441) </domain>
	I0929 11:30:27.814781  595895 main.go:141] libmachine: (addons-214441) 
	I0929 11:30:27.822484  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:b8:70:d1 in network default
	I0929 11:30:27.823310  595895 main.go:141] libmachine: (addons-214441) starting domain...
	I0929 11:30:27.823336  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:27.823353  595895 main.go:141] libmachine: (addons-214441) ensuring networks are active...
	I0929 11:30:27.824165  595895 main.go:141] libmachine: (addons-214441) Ensuring network default is active
	I0929 11:30:27.824600  595895 main.go:141] libmachine: (addons-214441) Ensuring network mk-addons-214441 is active
	I0929 11:30:27.825327  595895 main.go:141] libmachine: (addons-214441) getting domain XML...
	I0929 11:30:27.826485  595895 main.go:141] libmachine: (addons-214441) DBG | starting domain XML:
	I0929 11:30:27.826497  595895 main.go:141] libmachine: (addons-214441) DBG | <domain type='kvm'>
	I0929 11:30:27.826534  595895 main.go:141] libmachine: (addons-214441) DBG |   <name>addons-214441</name>
	I0929 11:30:27.826556  595895 main.go:141] libmachine: (addons-214441) DBG |   <uuid>44179717-3988-47cd-b8d8-61dffe58e059</uuid>
	I0929 11:30:27.826564  595895 main.go:141] libmachine: (addons-214441) DBG |   <memory unit='KiB'>4194304</memory>
	I0929 11:30:27.826573  595895 main.go:141] libmachine: (addons-214441) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I0929 11:30:27.826583  595895 main.go:141] libmachine: (addons-214441) DBG |   <vcpu placement='static'>2</vcpu>
	I0929 11:30:27.826594  595895 main.go:141] libmachine: (addons-214441) DBG |   <os>
	I0929 11:30:27.826603  595895 main.go:141] libmachine: (addons-214441) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0929 11:30:27.826611  595895 main.go:141] libmachine: (addons-214441) DBG |     <boot dev='cdrom'/>
	I0929 11:30:27.826619  595895 main.go:141] libmachine: (addons-214441) DBG |     <boot dev='hd'/>
	I0929 11:30:27.826627  595895 main.go:141] libmachine: (addons-214441) DBG |     <bootmenu enable='no'/>
	I0929 11:30:27.826636  595895 main.go:141] libmachine: (addons-214441) DBG |   </os>
	I0929 11:30:27.826643  595895 main.go:141] libmachine: (addons-214441) DBG |   <features>
	I0929 11:30:27.826652  595895 main.go:141] libmachine: (addons-214441) DBG |     <acpi/>
	I0929 11:30:27.826658  595895 main.go:141] libmachine: (addons-214441) DBG |     <apic/>
	I0929 11:30:27.826666  595895 main.go:141] libmachine: (addons-214441) DBG |     <pae/>
	I0929 11:30:27.826670  595895 main.go:141] libmachine: (addons-214441) DBG |   </features>
	I0929 11:30:27.826676  595895 main.go:141] libmachine: (addons-214441) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0929 11:30:27.826680  595895 main.go:141] libmachine: (addons-214441) DBG |   <clock offset='utc'/>
	I0929 11:30:27.826712  595895 main.go:141] libmachine: (addons-214441) DBG |   <on_poweroff>destroy</on_poweroff>
	I0929 11:30:27.826730  595895 main.go:141] libmachine: (addons-214441) DBG |   <on_reboot>restart</on_reboot>
	I0929 11:30:27.826740  595895 main.go:141] libmachine: (addons-214441) DBG |   <on_crash>destroy</on_crash>
	I0929 11:30:27.826748  595895 main.go:141] libmachine: (addons-214441) DBG |   <devices>
	I0929 11:30:27.826760  595895 main.go:141] libmachine: (addons-214441) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0929 11:30:27.826771  595895 main.go:141] libmachine: (addons-214441) DBG |     <disk type='file' device='cdrom'>
	I0929 11:30:27.826782  595895 main.go:141] libmachine: (addons-214441) DBG |       <driver name='qemu' type='raw'/>
	I0929 11:30:27.826804  595895 main.go:141] libmachine: (addons-214441) DBG |       <source file='/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/boot2docker.iso'/>
	I0929 11:30:27.826817  595895 main.go:141] libmachine: (addons-214441) DBG |       <target dev='hdc' bus='scsi'/>
	I0929 11:30:27.826828  595895 main.go:141] libmachine: (addons-214441) DBG |       <readonly/>
	I0929 11:30:27.826842  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0929 11:30:27.826853  595895 main.go:141] libmachine: (addons-214441) DBG |     </disk>
	I0929 11:30:27.826863  595895 main.go:141] libmachine: (addons-214441) DBG |     <disk type='file' device='disk'>
	I0929 11:30:27.826884  595895 main.go:141] libmachine: (addons-214441) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0929 11:30:27.826906  595895 main.go:141] libmachine: (addons-214441) DBG |       <source file='/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/addons-214441.rawdisk'/>
	I0929 11:30:27.826922  595895 main.go:141] libmachine: (addons-214441) DBG |       <target dev='hda' bus='virtio'/>
	I0929 11:30:27.826937  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0929 11:30:27.826947  595895 main.go:141] libmachine: (addons-214441) DBG |     </disk>
	I0929 11:30:27.826959  595895 main.go:141] libmachine: (addons-214441) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0929 11:30:27.826972  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0929 11:30:27.826984  595895 main.go:141] libmachine: (addons-214441) DBG |     </controller>
	I0929 11:30:27.827000  595895 main.go:141] libmachine: (addons-214441) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0929 11:30:27.827014  595895 main.go:141] libmachine: (addons-214441) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0929 11:30:27.827028  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0929 11:30:27.827039  595895 main.go:141] libmachine: (addons-214441) DBG |     </controller>
	I0929 11:30:27.827046  595895 main.go:141] libmachine: (addons-214441) DBG |     <interface type='network'>
	I0929 11:30:27.827053  595895 main.go:141] libmachine: (addons-214441) DBG |       <mac address='52:54:00:98:9c:d8'/>
	I0929 11:30:27.827060  595895 main.go:141] libmachine: (addons-214441) DBG |       <source network='mk-addons-214441'/>
	I0929 11:30:27.827087  595895 main.go:141] libmachine: (addons-214441) DBG |       <model type='virtio'/>
	I0929 11:30:27.827120  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0929 11:30:27.827133  595895 main.go:141] libmachine: (addons-214441) DBG |     </interface>
	I0929 11:30:27.827141  595895 main.go:141] libmachine: (addons-214441) DBG |     <interface type='network'>
	I0929 11:30:27.827146  595895 main.go:141] libmachine: (addons-214441) DBG |       <mac address='52:54:00:b8:70:d1'/>
	I0929 11:30:27.827154  595895 main.go:141] libmachine: (addons-214441) DBG |       <source network='default'/>
	I0929 11:30:27.827172  595895 main.go:141] libmachine: (addons-214441) DBG |       <model type='virtio'/>
	I0929 11:30:27.827197  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0929 11:30:27.827208  595895 main.go:141] libmachine: (addons-214441) DBG |     </interface>
	I0929 11:30:27.827218  595895 main.go:141] libmachine: (addons-214441) DBG |     <serial type='pty'>
	I0929 11:30:27.827232  595895 main.go:141] libmachine: (addons-214441) DBG |       <target type='isa-serial' port='0'>
	I0929 11:30:27.827252  595895 main.go:141] libmachine: (addons-214441) DBG |         <model name='isa-serial'/>
	I0929 11:30:27.827267  595895 main.go:141] libmachine: (addons-214441) DBG |       </target>
	I0929 11:30:27.827295  595895 main.go:141] libmachine: (addons-214441) DBG |     </serial>
	I0929 11:30:27.827306  595895 main.go:141] libmachine: (addons-214441) DBG |     <console type='pty'>
	I0929 11:30:27.827316  595895 main.go:141] libmachine: (addons-214441) DBG |       <target type='serial' port='0'/>
	I0929 11:30:27.827327  595895 main.go:141] libmachine: (addons-214441) DBG |     </console>
	I0929 11:30:27.827337  595895 main.go:141] libmachine: (addons-214441) DBG |     <input type='mouse' bus='ps2'/>
	I0929 11:30:27.827353  595895 main.go:141] libmachine: (addons-214441) DBG |     <input type='keyboard' bus='ps2'/>
	I0929 11:30:27.827365  595895 main.go:141] libmachine: (addons-214441) DBG |     <audio id='1' type='none'/>
	I0929 11:30:27.827381  595895 main.go:141] libmachine: (addons-214441) DBG |     <memballoon model='virtio'>
	I0929 11:30:27.827396  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0929 11:30:27.827407  595895 main.go:141] libmachine: (addons-214441) DBG |     </memballoon>
	I0929 11:30:27.827416  595895 main.go:141] libmachine: (addons-214441) DBG |     <rng model='virtio'>
	I0929 11:30:27.827462  595895 main.go:141] libmachine: (addons-214441) DBG |       <backend model='random'>/dev/random</backend>
	I0929 11:30:27.827477  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0929 11:30:27.827484  595895 main.go:141] libmachine: (addons-214441) DBG |     </rng>
	I0929 11:30:27.827492  595895 main.go:141] libmachine: (addons-214441) DBG |   </devices>
	I0929 11:30:27.827507  595895 main.go:141] libmachine: (addons-214441) DBG | </domain>
	I0929 11:30:27.827523  595895 main.go:141] libmachine: (addons-214441) DBG | 
	I0929 11:30:29.153785  595895 main.go:141] libmachine: (addons-214441) waiting for domain to start...
	I0929 11:30:29.155338  595895 main.go:141] libmachine: (addons-214441) domain is now running
	I0929 11:30:29.155366  595895 main.go:141] libmachine: (addons-214441) waiting for IP...
	I0929 11:30:29.156233  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:29.156741  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:29.156768  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:29.157097  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:29.157173  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:29.157084  595923 retry.go:31] will retry after 193.130078ms: waiting for domain to come up
	I0929 11:30:29.351641  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:29.352088  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:29.352131  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:29.352401  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:29.352453  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:29.352389  595923 retry.go:31] will retry after 298.936458ms: waiting for domain to come up
	I0929 11:30:29.653209  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:29.653776  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:29.653812  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:29.654092  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:29.654145  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:29.654057  595923 retry.go:31] will retry after 319.170448ms: waiting for domain to come up
	I0929 11:30:29.974953  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:29.975656  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:29.975697  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:29.976026  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:29.976053  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:29.976008  595923 retry.go:31] will retry after 599.248845ms: waiting for domain to come up
	I0929 11:30:30.576933  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:30.577607  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:30.577638  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:30.577976  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:30.578001  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:30.577944  595923 retry.go:31] will retry after 506.439756ms: waiting for domain to come up
	I0929 11:30:31.085911  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:31.086486  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:31.086516  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:31.086838  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:31.086901  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:31.086827  595923 retry.go:31] will retry after 714.950089ms: waiting for domain to come up
	I0929 11:30:31.803913  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:31.804432  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:31.804465  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:31.804799  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:31.804835  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:31.804762  595923 retry.go:31] will retry after 948.596157ms: waiting for domain to come up
	I0929 11:30:32.755226  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:32.755814  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:32.755837  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:32.756159  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:32.756191  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:32.756135  595923 retry.go:31] will retry after 1.377051804s: waiting for domain to come up
	I0929 11:30:34.136012  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:34.136582  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:34.136605  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:34.136880  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:34.136912  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:34.136849  595923 retry.go:31] will retry after 1.34696154s: waiting for domain to come up
	I0929 11:30:35.485739  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:35.486269  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:35.486292  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:35.486548  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:35.486587  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:35.486521  595923 retry.go:31] will retry after 1.574508192s: waiting for domain to come up
	I0929 11:30:37.063528  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:37.064142  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:37.064170  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:37.064559  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:37.064594  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:37.064489  595923 retry.go:31] will retry after 2.067291223s: waiting for domain to come up
	I0929 11:30:39.135405  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:39.135998  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:39.136030  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:39.136354  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:39.136412  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:39.136338  595923 retry.go:31] will retry after 3.104602856s: waiting for domain to come up
	I0929 11:30:42.242410  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:42.242939  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:42.242965  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:42.243288  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:42.243344  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:42.243280  595923 retry.go:31] will retry after 4.150705767s: waiting for domain to come up
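The repeated "will retry after ..." messages above come from a retry loop that backs off with growing, jittered delays until the freshly started domain reports an interface address, which it does just below. A minimal Go sketch of that pattern (growth factor and jitter are illustrative, not minikube's exact retry parameters):

// Sketch of a retry-with-backoff loop like the one producing the log lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling fn with exponentially growing, jittered delays
// until it succeeds or the overall deadline passes.
func retryUntil(deadline time.Duration, fn func() error) error {
	start := time.Now()
	delay := 200 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out: %w", err)
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay
	}
}

func main() {
	attempts := 0
	_ = retryUntil(30*time.Second, func() error {
		attempts++
		if attempts < 5 {
			return errors.New("waiting for domain to come up")
		}
		return nil
	})
}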
	I0929 11:30:46.398779  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.399347  595895 main.go:141] libmachine: (addons-214441) found domain IP: 192.168.39.76
	I0929 11:30:46.399374  595895 main.go:141] libmachine: (addons-214441) reserving static IP address...
	I0929 11:30:46.399388  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has current primary IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.399901  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find host DHCP lease matching {name: "addons-214441", mac: "52:54:00:98:9c:d8", ip: "192.168.39.76"} in network mk-addons-214441
	I0929 11:30:46.587177  595895 main.go:141] libmachine: (addons-214441) DBG | Getting to WaitForSSH function...
	I0929 11:30:46.587215  595895 main.go:141] libmachine: (addons-214441) reserved static IP address 192.168.39.76 for domain addons-214441
	I0929 11:30:46.587228  595895 main.go:141] libmachine: (addons-214441) waiting for SSH...
	I0929 11:30:46.590179  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.590588  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:minikube Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:46.590626  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.590750  595895 main.go:141] libmachine: (addons-214441) DBG | Using SSH client type: external
	I0929 11:30:46.590791  595895 main.go:141] libmachine: (addons-214441) DBG | Using SSH private key: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa (-rw-------)
	I0929 11:30:46.590840  595895 main.go:141] libmachine: (addons-214441) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0929 11:30:46.590868  595895 main.go:141] libmachine: (addons-214441) DBG | About to run SSH command:
	I0929 11:30:46.590883  595895 main.go:141] libmachine: (addons-214441) DBG | exit 0
	I0929 11:30:46.729877  595895 main.go:141] libmachine: (addons-214441) DBG | SSH cmd err, output: <nil>: 
	I0929 11:30:46.730171  595895 main.go:141] libmachine: (addons-214441) domain creation complete
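The SSH wait shown above shells out to the external ssh client with the logged options and treats a clean `exit 0` as readiness. A hedged Go sketch of that probe (waitForSSH is a hypothetical helper; the flags mirror the ones in the log):

// Sketch of the "waiting for SSH" probe: run `ssh ... exit 0` until it succeeds.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForSSH(ip, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	for time.Now().Before(deadline) {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			return nil // SSH answered and ran the no-op command
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s did not come up within %v", ip, timeout)
}

func main() {
	_ = waitForSSH("192.168.39.76",
		"/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa",
		2*time.Minute)
}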
	I0929 11:30:46.730534  595895 main.go:141] libmachine: (addons-214441) Calling .GetConfigRaw
	I0929 11:30:46.731196  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:46.731410  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:46.731600  595895 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0929 11:30:46.731623  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:30:46.732882  595895 main.go:141] libmachine: Detecting operating system of created instance...
	I0929 11:30:46.732897  595895 main.go:141] libmachine: Waiting for SSH to be available...
	I0929 11:30:46.732902  595895 main.go:141] libmachine: Getting to WaitForSSH function...
	I0929 11:30:46.732908  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:46.735685  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.736210  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:46.736238  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.736397  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:46.736652  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.736854  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.736998  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:46.737156  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:46.737392  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:46.737403  595895 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0929 11:30:46.844278  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 11:30:46.844312  595895 main.go:141] libmachine: Detecting the provisioner...
	I0929 11:30:46.844324  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:46.848224  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.849228  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:46.849264  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.849457  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:46.849706  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.849884  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.850038  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:46.850227  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:46.850481  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:46.850494  595895 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0929 11:30:46.959386  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0929 11:30:46.959537  595895 main.go:141] libmachine: found compatible host: buildroot
	I0929 11:30:46.959560  595895 main.go:141] libmachine: Provisioning with buildroot...
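Provisioner detection boils down to parsing the /etc/os-release output above into key/value pairs and matching on ID=buildroot. A small Go sketch of that parsing step (parseOSRelease is a hypothetical helper, not minikube's actual function):

// Sketch: parse /etc/os-release content and check the ID field.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parseOSRelease(contents string) map[string]string {
	kv := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		kv[parts[0]] = strings.Trim(parts[1], `"`) // strip optional quoting
	}
	return kv
}

func main() {
	out := "NAME=Buildroot\nVERSION=2025.02-dirty\nID=buildroot\nVERSION_ID=2025.02\nPRETTY_NAME=\"Buildroot 2025.02\"\n"
	if parseOSRelease(out)["ID"] == "buildroot" {
		fmt.Println("found compatible host: buildroot")
	}
}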
	I0929 11:30:46.959572  595895 main.go:141] libmachine: (addons-214441) Calling .GetMachineName
	I0929 11:30:46.959897  595895 buildroot.go:166] provisioning hostname "addons-214441"
	I0929 11:30:46.959920  595895 main.go:141] libmachine: (addons-214441) Calling .GetMachineName
	I0929 11:30:46.960158  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:46.963429  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.963851  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:46.963892  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.964187  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:46.964389  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.964590  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.964750  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:46.964942  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:46.965188  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:46.965202  595895 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-214441 && echo "addons-214441" | sudo tee /etc/hostname
	I0929 11:30:47.092132  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-214441
	
	I0929 11:30:47.092159  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.095605  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.096136  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.096169  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.096340  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.096555  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.096747  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.096902  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.097123  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:47.097351  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:47.097369  595895 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-214441' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-214441/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-214441' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 11:30:47.216048  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 11:30:47.216081  595895 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21654-591397/.minikube CaCertPath:/home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21654-591397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21654-591397/.minikube}
	I0929 11:30:47.216160  595895 buildroot.go:174] setting up certificates
	I0929 11:30:47.216176  595895 provision.go:84] configureAuth start
	I0929 11:30:47.216187  595895 main.go:141] libmachine: (addons-214441) Calling .GetMachineName
	I0929 11:30:47.216551  595895 main.go:141] libmachine: (addons-214441) Calling .GetIP
	I0929 11:30:47.219822  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.220206  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.220241  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.220424  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.222925  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.223320  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.223351  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.223603  595895 provision.go:143] copyHostCerts
	I0929 11:30:47.223674  595895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21654-591397/.minikube/cert.pem (1123 bytes)
	I0929 11:30:47.223815  595895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21654-591397/.minikube/key.pem (1675 bytes)
	I0929 11:30:47.223908  595895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21654-591397/.minikube/ca.pem (1082 bytes)
	I0929 11:30:47.223987  595895 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca-key.pem org=jenkins.addons-214441 san=[127.0.0.1 192.168.39.76 addons-214441 localhost minikube]
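The server certificate generated here is signed by the local minikube CA and carries the logged IPs and hostnames as Subject Alternative Names. A self-contained Go sketch of that step using crypto/x509 (key sizes, validity, and the throwaway CA below are illustrative; the real flow loads ca.pem and ca-key.pem from the certs directory):

// Sketch: issue a server cert with the SANs from the log, signed by a CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for illustration; minikube reuses the CA created earlier.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(1, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-214441"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the logged san=[...] list.
		DNSNames:    []string{"addons-214441", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.76")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}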
	I0929 11:30:47.541100  595895 provision.go:177] copyRemoteCerts
	I0929 11:30:47.541199  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 11:30:47.541238  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.544486  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.544940  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.545024  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.545286  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.545574  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.545766  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.545940  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:30:47.632441  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 11:30:47.665928  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 11:30:47.699464  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 11:30:47.731874  595895 provision.go:87] duration metric: took 515.680125ms to configureAuth
	I0929 11:30:47.731904  595895 buildroot.go:189] setting minikube options for container-runtime
	I0929 11:30:47.732120  595895 config.go:182] Loaded profile config "addons-214441": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:30:47.732187  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:47.732484  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.735606  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.736098  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.736147  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.736408  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.736676  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.736876  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.737026  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.737286  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:47.737503  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:47.737522  595895 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 11:30:47.845243  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0929 11:30:47.845278  595895 buildroot.go:70] root file system type: tmpfs
	I0929 11:30:47.845464  595895 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 11:30:47.845493  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.848685  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.849080  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.849125  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.849333  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.849561  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.849749  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.849921  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.850156  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:47.850438  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:47.850513  595895 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 11:30:47.980841  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 11:30:47.980885  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.984021  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.984467  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.984505  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.984746  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.984964  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.985145  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.985345  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.985533  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:47.985753  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:47.985769  595895 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 11:30:48.944806  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I0929 11:30:48.944837  595895 main.go:141] libmachine: Checking connection to Docker...
	I0929 11:30:48.944847  595895 main.go:141] libmachine: (addons-214441) Calling .GetURL
	I0929 11:30:48.946423  595895 main.go:141] libmachine: (addons-214441) DBG | using libvirt version 8000000
	I0929 11:30:48.949334  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:48.949705  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:48.949727  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:48.949905  595895 main.go:141] libmachine: Docker is up and running!
	I0929 11:30:48.949918  595895 main.go:141] libmachine: Reticulating splines...
	I0929 11:30:48.949926  595895 client.go:171] duration metric: took 22.382562322s to LocalClient.Create
	I0929 11:30:48.949961  595895 start.go:167] duration metric: took 22.382646372s to libmachine.API.Create "addons-214441"
	I0929 11:30:48.949977  595895 start.go:293] postStartSetup for "addons-214441" (driver="kvm2")
	I0929 11:30:48.949995  595895 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 11:30:48.950016  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:48.950285  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 11:30:48.950309  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:48.952588  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:48.952941  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:48.952973  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:48.953140  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:48.953358  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:48.953522  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:48.953678  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:30:49.038834  595895 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 11:30:49.044530  595895 info.go:137] Remote host: Buildroot 2025.02
	I0929 11:30:49.044562  595895 filesync.go:126] Scanning /home/jenkins/minikube-integration/21654-591397/.minikube/addons for local assets ...
	I0929 11:30:49.044653  595895 filesync.go:126] Scanning /home/jenkins/minikube-integration/21654-591397/.minikube/files for local assets ...
	I0929 11:30:49.044700  595895 start.go:296] duration metric: took 94.715435ms for postStartSetup
	I0929 11:30:49.044748  595895 main.go:141] libmachine: (addons-214441) Calling .GetConfigRaw
	I0929 11:30:49.045427  595895 main.go:141] libmachine: (addons-214441) Calling .GetIP
	I0929 11:30:49.048440  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.048801  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.048825  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.049194  595895 profile.go:143] Saving config to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/config.json ...
	I0929 11:30:49.049405  595895 start.go:128] duration metric: took 22.499712752s to createHost
	I0929 11:30:49.049432  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:49.052122  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.052625  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.052654  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.052915  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:49.053180  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:49.053373  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:49.053538  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:49.053724  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:49.053929  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:49.053940  595895 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0929 11:30:49.163416  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759145449.126116077
	
	I0929 11:30:49.163441  595895 fix.go:216] guest clock: 1759145449.126116077
	I0929 11:30:49.163449  595895 fix.go:229] Guest: 2025-09-29 11:30:49.126116077 +0000 UTC Remote: 2025-09-29 11:30:49.049418276 +0000 UTC m=+22.624163516 (delta=76.697801ms)
	I0929 11:30:49.163493  595895 fix.go:200] guest clock delta is within tolerance: 76.697801ms
	I0929 11:30:49.163499  595895 start.go:83] releasing machines lock for "addons-214441", held for 22.613874794s
	I0929 11:30:49.163528  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:49.163838  595895 main.go:141] libmachine: (addons-214441) Calling .GetIP
	I0929 11:30:49.166822  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.167209  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.167249  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.167420  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:49.168022  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:49.168252  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:49.168368  595895 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 11:30:49.168430  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:49.168489  595895 ssh_runner.go:195] Run: cat /version.json
	I0929 11:30:49.168513  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:49.172018  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.172253  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.172513  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.172540  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.172628  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.172666  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.172701  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:49.172958  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:49.173000  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:49.173136  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:49.173213  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:49.173301  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:49.173395  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:30:49.173457  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:30:49.251709  595895 ssh_runner.go:195] Run: systemctl --version
	I0929 11:30:49.275600  595895 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0929 11:30:49.282636  595895 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0929 11:30:49.282710  595895 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 11:30:49.304880  595895 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0929 11:30:49.304913  595895 start.go:495] detecting cgroup driver to use...
	I0929 11:30:49.305043  595895 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 11:30:49.330757  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 11:30:49.345061  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 11:30:49.359226  595895 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0929 11:30:49.359329  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0929 11:30:49.373874  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 11:30:49.388075  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 11:30:49.401811  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 11:30:49.415626  595895 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 11:30:49.431189  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 11:30:49.445445  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 11:30:49.459477  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 11:30:49.473176  595895 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 11:30:49.485689  595895 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0929 11:30:49.485783  595895 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0929 11:30:49.499975  595895 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 11:30:49.513013  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:49.660311  595895 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 11:30:49.703655  595895 start.go:495] detecting cgroup driver to use...
	I0929 11:30:49.703755  595895 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 11:30:49.722813  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 11:30:49.750032  595895 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 11:30:49.777529  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 11:30:49.795732  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 11:30:49.813375  595895 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0929 11:30:49.851205  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 11:30:49.869489  595895 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 11:30:49.896122  595895 ssh_runner.go:195] Run: which cri-dockerd
	I0929 11:30:49.900877  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 11:30:49.914013  595895 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 11:30:49.937663  595895 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 11:30:50.087078  595895 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 11:30:50.258242  595895 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0929 11:30:50.258407  595895 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0929 11:30:50.281600  595895 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 11:30:50.297843  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:50.442188  595895 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 11:30:51.468324  595895 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.026092315s)
	I0929 11:30:51.468405  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 11:30:51.485284  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 11:30:51.502338  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 11:30:51.520247  595895 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 11:30:51.674618  595895 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 11:30:51.823542  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:51.969743  595895 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 11:30:52.010885  595895 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 11:30:52.027992  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:52.187556  595895 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 11:30:52.300820  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 11:30:52.324658  595895 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 11:30:52.324786  595895 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 11:30:52.331994  595895 start.go:563] Will wait 60s for crictl version
	I0929 11:30:52.332070  595895 ssh_runner.go:195] Run: which crictl
	I0929 11:30:52.336923  595895 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 11:30:52.378177  595895 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 11:30:52.378280  595895 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 11:30:52.410851  595895 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 11:30:52.543475  595895 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 11:30:52.543553  595895 main.go:141] libmachine: (addons-214441) Calling .GetIP
	I0929 11:30:52.546859  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:52.547288  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:52.547313  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:52.547612  595895 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0929 11:30:52.553031  595895 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 11:30:52.570843  595895 kubeadm.go:875] updating cluster {Name:addons-214441 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 11:30:52.570982  595895 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 11:30:52.571045  595895 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 11:30:52.589813  595895 docker.go:691] Got preloaded images: 
	I0929 11:30:52.589850  595895 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.0 wasn't preloaded
	I0929 11:30:52.589920  595895 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0929 11:30:52.603859  595895 ssh_runner.go:195] Run: which lz4
	I0929 11:30:52.608929  595895 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0929 11:30:52.614449  595895 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0929 11:30:52.614480  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (353447550 bytes)
	I0929 11:30:54.030641  595895 docker.go:655] duration metric: took 1.421784291s to copy over tarball
	I0929 11:30:54.030729  595895 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0929 11:30:55.448691  595895 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.417923545s)
	I0929 11:30:55.448737  595895 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0929 11:30:55.496341  595895 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0929 11:30:55.514175  595895 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2632 bytes)
	I0929 11:30:55.539628  595895 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 11:30:55.556201  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:55.705196  595895 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 11:30:57.773379  595895 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.068131004s)
	I0929 11:30:57.773509  595895 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 11:30:57.795878  595895 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 11:30:57.795910  595895 cache_images.go:85] Images are preloaded, skipping loading
	I0929 11:30:57.795931  595895 kubeadm.go:926] updating node { 192.168.39.76 8443 v1.34.0 docker true true} ...
	I0929 11:30:57.796049  595895 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-214441 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 11:30:57.796127  595895 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 11:30:57.852690  595895 cni.go:84] Creating CNI manager for ""
	I0929 11:30:57.852756  595895 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 11:30:57.852774  595895 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 11:30:57.852803  595895 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.76 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-214441 NodeName:addons-214441 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.76"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.76 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 11:30:57.852981  595895 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.76
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-214441"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.76"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.76"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 11:30:57.853053  595895 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 11:30:57.866164  595895 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 11:30:57.866236  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 11:30:57.879054  595895 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0929 11:30:57.901136  595895 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 11:30:57.922808  595895 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I0929 11:30:57.944391  595895 ssh_runner.go:195] Run: grep 192.168.39.76	control-plane.minikube.internal$ /etc/hosts
	I0929 11:30:57.949077  595895 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.76	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 11:30:57.965713  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:58.115608  595895 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:30:58.151915  595895 certs.go:68] Setting up /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441 for IP: 192.168.39.76
	I0929 11:30:58.151940  595895 certs.go:194] generating shared ca certs ...
	I0929 11:30:58.151960  595895 certs.go:226] acquiring lock for ca certs: {Name:mk707c73ecd79d5343eca8617a792346e0c7ccb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.152119  595895 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21654-591397/.minikube/ca.key
	I0929 11:30:58.470474  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/ca.crt ...
	I0929 11:30:58.470507  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/ca.crt: {Name:mk182656d7edea57f023d2e0db199cb4225a8b4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.470704  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/ca.key ...
	I0929 11:30:58.470715  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/ca.key: {Name:mkd9949b3876b9f68542fba6d581787f4502134f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.470791  595895 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.key
	I0929 11:30:58.721631  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.crt ...
	I0929 11:30:58.721664  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.crt: {Name:mk28d9b982dd4335b19ce60c764e1cd1a4d53764 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.721838  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.key ...
	I0929 11:30:58.721850  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.key: {Name:mk92f9d60795b7f581dcb4003e857f2fb68fb997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.721920  595895 certs.go:256] generating profile certs ...
	I0929 11:30:58.721989  595895 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.key
	I0929 11:30:58.722004  595895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt with IP's: []
	I0929 11:30:59.043304  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt ...
	I0929 11:30:59.043336  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: {Name:mkd724da95490eed1b0581ef6c65a2b1785468b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.043499  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.key ...
	I0929 11:30:59.043510  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.key: {Name:mkba543125a928af6b44a2eb304c49514c816581 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.043578  595895 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key.adbef5ab
	I0929 11:30:59.043598  595895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt.adbef5ab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.76]
	I0929 11:30:59.456164  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt.adbef5ab ...
	I0929 11:30:59.456200  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt.adbef5ab: {Name:mk5a23687be38fbd7ef5257880d1d7f5b199f933 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.456424  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key.adbef5ab ...
	I0929 11:30:59.456443  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key.adbef5ab: {Name:mke7b9b847497d2728644e9b30a8393a50e57e5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.456526  595895 certs.go:381] copying /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt.adbef5ab -> /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt
	I0929 11:30:59.456638  595895 certs.go:385] copying /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key.adbef5ab -> /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key
	I0929 11:30:59.456705  595895 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.key
	I0929 11:30:59.456726  595895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.crt with IP's: []
	I0929 11:30:59.785388  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.crt ...
	I0929 11:30:59.785424  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.crt: {Name:mkb2afc6ab3119c9842fe1ce2f48d7c6196dbfb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.785611  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.key ...
	I0929 11:30:59.785642  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.key: {Name:mk6b37b3ae22881d553c47031d96c6f22bdfded2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.785833  595895 certs.go:484] found cert: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 11:30:59.785879  595895 certs.go:484] found cert: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem (1082 bytes)
	I0929 11:30:59.785905  595895 certs.go:484] found cert: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/cert.pem (1123 bytes)
	I0929 11:30:59.785932  595895 certs.go:484] found cert: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/key.pem (1675 bytes)
	I0929 11:30:59.786662  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 11:30:59.821270  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 11:30:59.853588  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 11:30:59.885559  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 11:30:59.916538  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 11:30:59.948991  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 11:30:59.981478  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 11:31:00.014753  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0929 11:31:00.046891  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 11:31:00.079370  595895 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 11:31:00.101600  595895 ssh_runner.go:195] Run: openssl version
	I0929 11:31:00.108829  595895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 11:31:00.123448  595895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:31:00.129416  595895 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 11:30 /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:31:00.129502  595895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:31:00.137583  595895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 11:31:00.152396  595895 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 11:31:00.157895  595895 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 11:31:00.157960  595895 kubeadm.go:392] StartCluster: {Name:addons-214441 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:31:00.158083  595895 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 11:31:00.176917  595895 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 11:31:00.190119  595895 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 11:31:00.203558  595895 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 11:31:00.216736  595895 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 11:31:00.216758  595895 kubeadm.go:157] found existing configuration files:
	
	I0929 11:31:00.216805  595895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 11:31:00.229008  595895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 11:31:00.229138  595895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 11:31:00.242441  595895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 11:31:00.254460  595895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 11:31:00.254523  595895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 11:31:00.268124  595895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 11:31:00.284523  595895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 11:31:00.284596  595895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 11:31:00.297510  595895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 11:31:00.311858  595895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 11:31:00.311927  595895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 11:31:00.329319  595895 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0929 11:31:00.392668  595895 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 11:31:00.392776  595895 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 11:31:00.500945  595895 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 11:31:00.501073  595895 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 11:31:00.501248  595895 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 11:31:00.518470  595895 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 11:31:00.521672  595895 out.go:252]   - Generating certificates and keys ...
	I0929 11:31:00.521778  595895 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 11:31:00.521835  595895 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 11:31:00.844406  595895 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 11:31:01.356940  595895 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 11:31:01.469316  595895 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 11:31:01.609628  595895 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 11:31:01.854048  595895 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 11:31:01.854239  595895 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-214441 localhost] and IPs [192.168.39.76 127.0.0.1 ::1]
	I0929 11:31:02.222219  595895 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 11:31:02.222361  595895 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-214441 localhost] and IPs [192.168.39.76 127.0.0.1 ::1]
	I0929 11:31:02.331774  595895 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 11:31:02.452417  595895 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 11:31:03.277600  595895 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 11:31:03.277709  595895 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 11:31:03.337296  595895 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 11:31:03.576740  595895 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 11:31:03.754957  595895 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 11:31:04.028596  595895 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 11:31:04.458901  595895 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 11:31:04.459731  595895 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 11:31:04.461956  595895 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 11:31:04.463895  595895 out.go:252]   - Booting up control plane ...
	I0929 11:31:04.464031  595895 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 11:31:04.464116  595895 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 11:31:04.464220  595895 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 11:31:04.482430  595895 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 11:31:04.482595  595895 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 11:31:04.490659  595895 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 11:31:04.490827  595895 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 11:31:04.490920  595895 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 11:31:04.666361  595895 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 11:31:04.666495  595895 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 11:31:05.175870  595895 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 510.006022ms
	I0929 11:31:05.187944  595895 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 11:31:05.188057  595895 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.76:8443/livez
	I0929 11:31:05.188256  595895 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 11:31:05.188362  595895 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 11:31:07.767053  595895 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.579446651s
	I0929 11:31:09.215755  595895 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.029766048s
	I0929 11:31:11.189186  595895 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.002998119s
	I0929 11:31:11.214239  595895 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 11:31:11.232892  595895 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 11:31:11.255389  595895 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 11:31:11.255580  595895 kubeadm.go:310] [mark-control-plane] Marking the node addons-214441 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 11:31:11.270844  595895 kubeadm.go:310] [bootstrap-token] Using token: 7wgemt.sdnt4jx2dgy9ll51
	I0929 11:31:11.272442  595895 out.go:252]   - Configuring RBAC rules ...
	I0929 11:31:11.272557  595895 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 11:31:11.279364  595895 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 11:31:11.294463  595895 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 11:31:11.298793  595895 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 11:31:11.306582  595895 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 11:31:11.323727  595895 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 11:31:11.601710  595895 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 11:31:12.069553  595895 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 11:31:12.597044  595895 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 11:31:12.597931  595895 kubeadm.go:310] 
	I0929 11:31:12.598017  595895 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 11:31:12.598026  595895 kubeadm.go:310] 
	I0929 11:31:12.598142  595895 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 11:31:12.598153  595895 kubeadm.go:310] 
	I0929 11:31:12.598181  595895 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 11:31:12.598281  595895 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 11:31:12.598374  595895 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 11:31:12.598390  595895 kubeadm.go:310] 
	I0929 11:31:12.598436  595895 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 11:31:12.598442  595895 kubeadm.go:310] 
	I0929 11:31:12.598481  595895 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 11:31:12.598497  595895 kubeadm.go:310] 
	I0929 11:31:12.598577  595895 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 11:31:12.598692  595895 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 11:31:12.598809  595895 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 11:31:12.598828  595895 kubeadm.go:310] 
	I0929 11:31:12.598937  595895 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 11:31:12.599041  595895 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 11:31:12.599055  595895 kubeadm.go:310] 
	I0929 11:31:12.599196  595895 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7wgemt.sdnt4jx2dgy9ll51 \
	I0929 11:31:12.599332  595895 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f2c60916dfe453d33b3c30594ac6ac02ea520a59a4bdac7119cd9e587175e1eb \
	I0929 11:31:12.599365  595895 kubeadm.go:310] 	--control-plane 
	I0929 11:31:12.599397  595895 kubeadm.go:310] 
	I0929 11:31:12.599486  595895 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 11:31:12.599496  595895 kubeadm.go:310] 
	I0929 11:31:12.599568  595895 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7wgemt.sdnt4jx2dgy9ll51 \
	I0929 11:31:12.599705  595895 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f2c60916dfe453d33b3c30594ac6ac02ea520a59a4bdac7119cd9e587175e1eb 
	I0929 11:31:12.601217  595895 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 11:31:12.601272  595895 cni.go:84] Creating CNI manager for ""
	I0929 11:31:12.601305  595895 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 11:31:12.603223  595895 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0929 11:31:12.604766  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0929 11:31:12.618554  595895 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0929 11:31:12.641768  595895 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 11:31:12.641942  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:12.641954  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-214441 minikube.k8s.io/updated_at=2025_09_29T11_31_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b8c533eb3dce8338d3d4e7231ea97d1c44ed6f81 minikube.k8s.io/name=addons-214441 minikube.k8s.io/primary=true
	I0929 11:31:12.682767  595895 ops.go:34] apiserver oom_adj: -16
	I0929 11:31:12.800130  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:13.300439  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:13.800339  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:14.300644  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:14.800381  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:15.301049  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:15.801207  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:16.301226  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:16.801024  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:17.300849  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:17.440215  595895 kubeadm.go:1105] duration metric: took 4.798376612s to wait for elevateKubeSystemPrivileges
	I0929 11:31:17.440271  595895 kubeadm.go:394] duration metric: took 17.282308974s to StartCluster
	I0929 11:31:17.440297  595895 settings.go:142] acquiring lock: {Name:mk832bb073af4ae47756dd4494ea087d7aa99c2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:31:17.440448  595895 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21654-591397/kubeconfig
	I0929 11:31:17.441186  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/kubeconfig: {Name:mk64b4db01785e3abeedb000f7d1263b1f56db2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:31:17.441409  595895 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 11:31:17.441416  595895 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 11:31:17.441496  595895 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0929 11:31:17.441684  595895 config.go:182] Loaded profile config "addons-214441": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:31:17.441696  595895 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-214441"
	I0929 11:31:17.441708  595895 addons.go:69] Setting yakd=true in profile "addons-214441"
	I0929 11:31:17.441736  595895 addons.go:238] Setting addon yakd=true in "addons-214441"
	I0929 11:31:17.441757  595895 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-214441"
	I0929 11:31:17.441709  595895 addons.go:69] Setting ingress=true in profile "addons-214441"
	I0929 11:31:17.441784  595895 addons.go:238] Setting addon ingress=true in "addons-214441"
	I0929 11:31:17.441793  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.441803  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.441799  595895 addons.go:69] Setting default-storageclass=true in profile "addons-214441"
	I0929 11:31:17.441840  595895 addons.go:69] Setting gcp-auth=true in profile "addons-214441"
	I0929 11:31:17.441876  595895 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-214441"
	I0929 11:31:17.441886  595895 mustload.go:65] Loading cluster: addons-214441
	I0929 11:31:17.441893  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442145  595895 addons.go:69] Setting registry=true in profile "addons-214441"
	I0929 11:31:17.442160  595895 addons.go:238] Setting addon registry=true in "addons-214441"
	I0929 11:31:17.442191  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442280  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.442300  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.442353  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.442366  595895 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-214441"
	I0929 11:31:17.442371  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.442380  595895 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-214441"
	I0929 11:31:17.442381  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.442385  595895 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-214441"
	I0929 11:31:17.442396  595895 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-214441"
	I0929 11:31:17.442399  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.442425  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.442400  595895 addons.go:69] Setting cloud-spanner=true in profile "addons-214441"
	I0929 11:31:17.442448  595895 addons.go:69] Setting registry-creds=true in profile "addons-214441"
	I0929 11:31:17.442456  595895 addons.go:238] Setting addon cloud-spanner=true in "addons-214441"
	I0929 11:31:17.442469  595895 addons.go:238] Setting addon registry-creds=true in "addons-214441"
	I0929 11:31:17.442478  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.442491  595895 addons.go:69] Setting storage-provisioner=true in profile "addons-214441"
	I0929 11:31:17.442514  595895 addons.go:238] Setting addon storage-provisioner=true in "addons-214441"
	I0929 11:31:17.442543  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442544  595895 addons.go:69] Setting inspektor-gadget=true in profile "addons-214441"
	I0929 11:31:17.442557  595895 addons.go:238] Setting addon inspektor-gadget=true in "addons-214441"
	I0929 11:31:17.442563  595895 addons.go:69] Setting ingress-dns=true in profile "addons-214441"
	I0929 11:31:17.442575  595895 addons.go:238] Setting addon ingress-dns=true in "addons-214441"
	I0929 11:31:17.442588  595895 addons.go:69] Setting metrics-server=true in profile "addons-214441"
	I0929 11:31:17.442591  595895 addons.go:69] Setting volumesnapshots=true in profile "addons-214441"
	I0929 11:31:17.442599  595895 addons.go:238] Setting addon metrics-server=true in "addons-214441"
	I0929 11:31:17.442610  595895 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-214441"
	I0929 11:31:17.442602  595895 config.go:182] Loaded profile config "addons-214441": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:31:17.442620  595895 addons.go:238] Setting addon volumesnapshots=true in "addons-214441"
	I0929 11:31:17.442622  595895 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-214441"
	I0929 11:31:17.442631  595895 addons.go:69] Setting volcano=true in profile "addons-214441"
	I0929 11:31:17.442647  595895 addons.go:238] Setting addon volcano=true in "addons-214441"
	I0929 11:31:17.442826  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442847  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442963  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443004  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443177  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443198  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443212  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443242  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443255  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.443270  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443292  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443439  595895 out.go:179] * Verifying Kubernetes components...
	I0929 11:31:17.443489  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443521  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443564  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443603  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443459  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.443699  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.443852  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443879  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443895  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.444137  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.444199  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.444468  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.454269  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:31:17.455462  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.455556  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.457160  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.457213  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.458697  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.458765  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.459732  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37039
	I0929 11:31:17.459901  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.459979  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.460127  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.460161  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.460170  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.460239  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.460291  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44679
	I0929 11:31:17.460695  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.463901  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.463928  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.464092  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.465162  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.465408  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.466171  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.466824  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.467158  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.479447  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.479512  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.482323  595895 addons.go:238] Setting addon default-storageclass=true in "addons-214441"
	I0929 11:31:17.482391  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.482773  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.482798  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.493064  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45797
	I0929 11:31:17.493710  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40233
	I0929 11:31:17.496980  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.497697  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.497723  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.498583  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.499544  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.500891  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.502188  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.503325  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.503345  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.503676  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33541
	I0929 11:31:17.503826  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.504644  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.504730  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.505209  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.506256  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.506279  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.506340  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36303
	I0929 11:31:17.506984  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46129
	I0929 11:31:17.507294  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.507677  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.507745  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37135
	I0929 11:31:17.508552  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.509057  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.509394  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.509407  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.509415  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.510041  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.510142  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.510163  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.511579  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.513259  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.513521  595895 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-214441"
	I0929 11:31:17.513538  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46151
	I0929 11:31:17.513575  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.514124  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.514166  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.511927  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.514352  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.513596  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42277
	I0929 11:31:17.520718  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.520752  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44361
	I0929 11:31:17.521039  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.521092  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33947
	I0929 11:31:17.521207  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43367
	I0929 11:31:17.520724  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37739
	I0929 11:31:17.522317  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.522444  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.522469  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.522507  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.522852  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.522920  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.523211  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.523225  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.523306  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.523461  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.523473  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.524082  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.524376  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.524523  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.524535  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.524631  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.524746  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34633
	I0929 11:31:17.529249  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.529354  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.529387  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.529766  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.529766  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.529799  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.529807  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.529908  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.530061  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.530343  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.530371  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.530465  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.530878  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.530932  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.531382  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.531639  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.531658  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.532124  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.532483  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.533015  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.533033  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.533472  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.533508  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.534270  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.535229  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.535779  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.535886  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.537511  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.538187  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39035
	I0929 11:31:17.539952  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.540005  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.540222  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I0929 11:31:17.540575  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43245
	I0929 11:31:17.540786  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.540854  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.540890  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.541625  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.541647  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.542032  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.542195  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.542600  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.543176  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.543185  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.543199  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.543204  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.543307  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
	I0929 11:31:17.544136  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.544545  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.544610  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.544640  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.545415  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.545449  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.546464  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.546490  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.546965  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.547387  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.548714  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.548795  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.550669  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38149
	I0929 11:31:17.551412  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.551773  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34225
	I0929 11:31:17.552171  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.552255  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.552199  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.552753  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.552854  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.553685  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.553778  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.554307  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.554514  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.555149  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.557383  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.558025  595895 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 11:31:17.559210  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 11:31:17.559231  595895 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 11:31:17.559262  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.559338  595895 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.12.2
	I0929 11:31:17.560620  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.560681  595895 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.12.2
	I0929 11:31:17.560823  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34665
	I0929 11:31:17.561393  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.562236  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.562295  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.562751  595895 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 11:31:17.563140  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.563492  595895 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.12.2
	I0929 11:31:17.564252  595895 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 11:31:17.564269  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 11:31:17.564289  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.564293  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.564684  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.564737  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.565023  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.565146  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.567800  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.568057  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.568262  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36805
	I0929 11:31:17.568522  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.568701  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.569229  595895 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0929 11:31:17.569253  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (498149 bytes)
	I0929 11:31:17.569273  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.569959  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.570047  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.572257  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.572409  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.572423  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.573470  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.573495  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.573534  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42077
	I0929 11:31:17.574161  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.574166  595895 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 11:31:17.574420  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.574975  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.575036  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.575329  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.575415  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.575430  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.575671  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.575865  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.576099  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.577061  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.577247  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.577378  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.577535  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.577554  595895 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 11:31:17.577582  595895 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 11:31:17.577605  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.579736  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40239
	I0929 11:31:17.580597  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.581383  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.581446  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.582289  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.582694  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44947
	I0929 11:31:17.582952  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.583853  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.585630  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46433
	I0929 11:31:17.585637  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38605
	I0929 11:31:17.586733  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.586755  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.586846  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.587240  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.587458  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.587548  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.587503  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I0929 11:31:17.588342  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.588817  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.588838  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.589534  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35991
	I0929 11:31:17.589680  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.589727  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.589953  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.590461  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.590684  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.590701  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.590814  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.590864  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.591866  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.592243  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.592985  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.593774  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.593791  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.594759  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.595210  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.595390  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.596824  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.597871  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.598227  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.598762  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35305
	I0929 11:31:17.599344  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 11:31:17.600871  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.600871  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.600871  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.600928  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.600961  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.600994  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39573
	I0929 11:31:17.601002  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43843
	I0929 11:31:17.601641  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 11:31:17.601827  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.601850  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.601913  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.602052  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.602151  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I0929 11:31:17.602155  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.602306  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.602590  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.602610  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.602811  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.602977  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.603038  595895 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 11:31:17.603089  595895 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 11:31:17.603260  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.603328  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.603564  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.603593  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.603752  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.604258  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.604320  595895 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 11:31:17.604825  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.604525  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.605686  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.605694  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.604846  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.604946  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 11:31:17.605125  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.606062  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.606154  595895 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 11:31:17.606169  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.606174  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.607283  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.607459  595895 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 11:31:17.607513  595895 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 11:31:17.608000  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 11:31:17.608022  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.607722  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.607825  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.608327  595895 out.go:179]   - Using image docker.io/busybox:stable
	I0929 11:31:17.608504  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.609208  595895 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 11:31:17.609380  595895 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 11:31:17.609617  595895 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 11:31:17.609695  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.609885  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
	I0929 11:31:17.610214  595895 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 11:31:17.610480  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 11:31:17.610442  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.610634  595895 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:31:17.610651  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 11:31:17.610666  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.610637  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 11:31:17.610551  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.611056  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.611127  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.611242  595895 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 11:31:17.612177  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.612200  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.612367  595895 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 11:31:17.612539  595895 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 11:31:17.612558  595895 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 11:31:17.612574  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 11:31:17.612702  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.612652  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.613066  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.613132  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.613978  595895 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 11:31:17.614058  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 11:31:17.614157  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.614015  595895 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 11:31:17.614286  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 11:31:17.614314  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.614339  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40205
	I0929 11:31:17.614532  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 11:31:17.614774  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.614918  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 11:31:17.615384  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.615994  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.616036  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.616065  595895 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 11:31:17.616139  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 11:31:17.616150  595895 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 11:31:17.616217  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.616451  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.616766  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.617254  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 11:31:17.618390  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.618595  595895 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 11:31:17.619658  595895 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 11:31:17.619715  595895 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 11:31:17.619728  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 11:31:17.619752  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.619788  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 11:31:17.620191  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.620909  595895 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 11:31:17.620926  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 11:31:17.621015  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.621216  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.622235  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.622260  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.622296  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 11:31:17.622987  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.623010  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.623146  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.623384  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.623851  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 11:31:17.623870  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 11:31:17.623891  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.623910  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.623977  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.623991  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.624284  595895 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 11:31:17.624300  595895 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 11:31:17.624317  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.624324  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.624330  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.624655  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.624690  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.625088  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.625297  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.626099  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.626182  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.626247  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.626251  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.626597  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.626789  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.626890  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.627091  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.627238  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.627284  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.627374  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.627541  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.627907  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.627938  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.627949  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.627979  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.628066  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.628081  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.628268  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.628308  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.628533  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.628572  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.628735  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.628848  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.629214  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.629249  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.629266  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.629512  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.629592  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.629606  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.629764  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.629861  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.630008  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.630062  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.630142  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.630197  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.630311  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.630370  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.630910  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.631305  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.631821  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.632272  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.632296  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.632442  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.632503  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.632710  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.632789  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.633084  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.633162  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.633176  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.633207  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.633228  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.633242  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.633391  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.633435  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.633557  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.633619  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.633759  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.633793  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.634042  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.634042  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.634131  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.634164  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.634219  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.634716  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.634894  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.635088  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.635265  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	W0929 11:31:17.919750  595895 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44698->192.168.39.76:22: read: connection reset by peer
	I0929 11:31:17.919798  595895 retry.go:31] will retry after 127.603101ms: ssh: handshake failed: read tcp 192.168.39.1:44698->192.168.39.76:22: read: connection reset by peer
	W0929 11:31:17.927998  595895 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44716->192.168.39.76:22: read: connection reset by peer
	I0929 11:31:17.928034  595895 retry.go:31] will retry after 352.316454ms: ssh: handshake failed: read tcp 192.168.39.1:44716->192.168.39.76:22: read: connection reset by peer
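(Editor's note: the two SSH handshake failures above are absorbed by a retry-with-delay helper — the retry.go:31 "will retry after ..." lines — rather than failing the addon install outright. Below is a minimal Go sketch of that pattern; the function name, attempt count, and delays are illustrative, not minikube's actual retry code.)

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryAfter re-runs fn up to `attempts` times, sleeping `delay` between
	// failures, mirroring the "will retry after ..." log lines above.
	// Illustrative sketch only, not minikube's retry.go implementation.
	func retryAfter(attempts int, delay time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil // success, stop retrying
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err // give up after the last attempt
	}

	func main() {
		calls := 0
		err := retryAfter(3, 150*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return errors.New("ssh: handshake failed") // simulate the transient dial error
			}
			return nil
		})
		fmt.Println("result:", err)
	}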
	I0929 11:31:18.834850  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 11:31:18.834892  595895 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 11:31:18.867206  595895 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 11:31:18.867237  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 11:31:18.998018  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 11:31:19.019969  595895 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.57851512s)
	I0929 11:31:19.019988  595895 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.56567428s)
	I0929 11:31:19.020058  595895 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:31:19.020195  595895 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 11:31:19.047383  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 11:31:19.178551  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 11:31:19.194460  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 11:31:19.203493  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:31:19.224634  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0929 11:31:19.236908  595895 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:19.236937  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 11:31:19.339094  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 11:31:19.470368  595895 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 11:31:19.470407  595895 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 11:31:19.482955  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 11:31:19.507279  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 11:31:19.533452  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 11:31:19.533481  595895 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 11:31:19.580275  595895 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 11:31:19.580310  595895 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 11:31:19.612191  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 11:31:19.612228  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 11:31:19.656222  595895 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 11:31:19.656250  595895 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 11:31:19.707608  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 11:31:19.720943  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:19.949642  595895 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 11:31:19.949675  595895 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 11:31:20.010236  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 11:31:20.010269  595895 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 11:31:20.143152  595895 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 11:31:20.143179  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 11:31:20.164194  595895 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 11:31:20.164223  595895 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 11:31:20.178619  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 11:31:20.178652  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 11:31:20.352326  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 11:31:20.352354  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 11:31:20.399905  595895 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 11:31:20.399935  595895 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 11:31:20.528800  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 11:31:20.554026  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 11:31:20.608085  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 11:31:20.608132  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 11:31:20.855879  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 11:31:20.901072  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 11:31:20.901124  595895 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 11:31:21.046874  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 11:31:21.046903  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 11:31:21.279957  595895 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 11:31:21.279985  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 11:31:21.494633  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 11:31:21.494662  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 11:31:21.896279  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 11:31:22.355612  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 11:31:22.355644  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 11:31:23.136046  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 11:31:23.136083  595895 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 11:31:23.742895  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 11:31:23.742921  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 11:31:24.397559  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 11:31:24.397588  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 11:31:24.806696  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 11:31:24.806729  595895 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 11:31:25.028630  595895 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 11:31:25.028675  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:25.032868  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:25.033494  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:25.033526  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:25.033760  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:25.034027  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:25.034259  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:25.034422  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:25.610330  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 11:31:25.954809  595895 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 11:31:26.260607  595895 addons.go:238] Setting addon gcp-auth=true in "addons-214441"
	I0929 11:31:26.260695  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:26.261024  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:26.261068  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:26.276135  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36421
	I0929 11:31:26.276726  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:26.277323  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:26.277354  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:26.277924  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:26.278456  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:26.278490  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:26.293277  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40243
	I0929 11:31:26.293786  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:26.294319  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:26.294344  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:26.294858  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:26.295136  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:26.297279  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:26.297583  595895 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 11:31:26.297612  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:26.301409  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:26.302065  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:26.302093  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:26.302272  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:26.302474  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:26.302636  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:26.302830  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:26.648618  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.65053686s)
	I0929 11:31:26.648643  595895 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.628556534s)
	I0929 11:31:26.648693  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:26.648703  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:26.648707  595895 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.628486823s)
	I0929 11:31:26.648740  595895 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0929 11:31:26.648855  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.601423652s)
	I0929 11:31:26.648889  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:26.648898  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:26.649041  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:26.649056  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:26.649066  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:26.649073  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:26.649181  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:26.649214  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:26.649225  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:26.649256  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:26.649265  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:26.649555  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:26.649585  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:26.649698  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:26.649728  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:26.649741  595895 node_ready.go:35] waiting up to 6m0s for node "addons-214441" to be "Ready" ...
	I0929 11:31:26.649625  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:26.649665  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:26.797678  595895 node_ready.go:49] node "addons-214441" is "Ready"
	I0929 11:31:26.797712  595895 node_ready.go:38] duration metric: took 147.94134ms for node "addons-214441" to be "Ready" ...
	I0929 11:31:26.797735  595895 api_server.go:52] waiting for apiserver process to appear ...
	I0929 11:31:26.797797  595895 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:31:27.078868  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:27.078896  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:27.079284  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:27.079351  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:27.079372  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:27.220384  595895 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-214441" context rescaled to 1 replicas
	I0929 11:31:30.522194  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.34358993s)
	I0929 11:31:30.522263  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.327765304s)
	I0929 11:31:30.522284  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522297  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522297  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522308  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522336  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.318803941s)
	I0929 11:31:30.522386  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522398  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522641  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.522658  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.522685  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522695  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522794  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.522804  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.522813  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522819  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522874  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:30.522863  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.522905  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.522914  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522922  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522952  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:30.522984  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.522990  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.523183  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:30.523188  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.523205  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.523212  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.523216  595895 addons.go:479] Verifying addon ingress=true in "addons-214441"
	I0929 11:31:30.523222  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.527182  595895 out.go:179] * Verifying ingress addon...
	I0929 11:31:30.529738  595895 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 11:31:30.708830  595895 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 11:31:30.708859  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:31.235125  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:31.629964  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:32.068126  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:32.586294  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:33.055440  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:33.661344  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
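(Editor's note: the kapi.go:75/96 lines above and further below poll the ingress-nginx namespace for pods matching app.kubernetes.io/name=ingress-nginx until they leave Pending. The following is a rough client-go sketch of that kind of label-selector wait loop; the kubeconfig path, poll interval, and timeout are assumptions for the example and not minikube's own helper.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodsRunning polls a namespace for pods matching a label selector
	// until every matching pod reports phase Running, roughly what the
	// "waiting for pod" log lines are doing. Sketch only.
	func waitForPodsRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			running := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					running++
				}
			}
			if len(pods.Items) > 0 && running == len(pods.Items) {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
	}

	func main() {
		// Kubeconfig path is an assumption for this example.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForPodsRunning(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("ingress-nginx pods are Running")
	}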
	I0929 11:31:33.865322  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (14.640641229s)
	I0929 11:31:33.865361  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (14.526214451s)
	I0929 11:31:33.865396  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865407  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865413  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (14.382417731s)
	I0929 11:31:33.865425  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865448  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (14.358144157s)
	I0929 11:31:33.865456  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865470  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865472  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865527  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (14.157883934s)
	I0929 11:31:33.865528  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865545  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865554  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865410  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865659  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (14.144676501s)
	W0929 11:31:33.865707  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:33.865740  595895 retry.go:31] will retry after 127.952259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:33.865790  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (13.336965067s)
	I0929 11:31:33.865796  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.865807  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.865810  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865818  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865821  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865826  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865864  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.865883  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.865895  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.865895  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.865906  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865922  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.865928  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865931  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.865939  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865945  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865960  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (13.311901558s)
	I0929 11:31:33.865978  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865986  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866077  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (13.010152282s)
	I0929 11:31:33.866096  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.866124  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866162  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866187  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866214  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866223  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866230  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.866237  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866283  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (11.969964695s)
	W0929 11:31:33.866347  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 11:31:33.866370  595895 retry.go:31] will retry after 213.926415ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
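(Editor's note: the failure above is an ordering race — the VolumeSnapshot CRDs and the csi-hostpath-snapclass VolumeSnapshotClass that depends on them are applied in the same kubectl invocation, so the custom resource is rejected until the CRDs are registered, and minikube simply retries the apply. One alternative way to serialize this is sketched below with kubectl wait; the CRD names and kubeconfig path come from the log, everything else is illustrative and not what minikube does.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Wait for each snapshot CRD to reach the Established condition before
	// applying resources that reference it. Sketch only.
	func main() {
		crds := []string{
			"volumesnapshotclasses.snapshot.storage.k8s.io",
			"volumesnapshotcontents.snapshot.storage.k8s.io",
			"volumesnapshots.snapshot.storage.k8s.io",
		}
		for _, crd := range crds {
			cmd := exec.Command("kubectl",
				"--kubeconfig", "/var/lib/minikube/kubeconfig",
				"wait", "--for=condition=Established", "--timeout=60s",
				"crd/"+crd)
			out, err := cmd.CombinedOutput()
			if err != nil {
				fmt.Printf("wait for %s failed: %v\n%s", crd, err, out)
				continue
			}
			fmt.Printf("%s", out)
		}
	}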
	I0929 11:31:33.866587  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866618  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866622  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866627  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.866630  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866636  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866640  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866651  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866662  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866606  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866736  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866752  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866766  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.866780  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866875  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866910  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866925  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.867202  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.867264  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.867284  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.867303  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.867339  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.867618  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.867761  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.867769  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.867778  595895 addons.go:479] Verifying addon registry=true in "addons-214441"
	I0929 11:31:33.868269  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.868300  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.868305  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.868451  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.868463  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.868472  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.868479  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.869037  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.869070  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.869076  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.869084  595895 addons.go:479] Verifying addon metrics-server=true in "addons-214441"
	I0929 11:31:33.869798  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.869839  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.869847  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.869975  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.870031  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.871564  595895 out.go:179] * Verifying registry addon...
	I0929 11:31:33.872479  595895 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-214441 service yakd-dashboard -n yakd-dashboard
	
	I0929 11:31:33.874294  595895 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 11:31:33.993863  595895 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 11:31:33.993900  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:33.994009  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:34.081538  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 11:31:34.115447  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:34.146570  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:34.146609  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:34.146947  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:34.146967  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:34.413578  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.803181451s)
	I0929 11:31:34.413616  595895 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (8.116003731s)
	I0929 11:31:34.413656  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:34.413669  595895 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (7.615843233s)
	I0929 11:31:34.413709  595895 api_server.go:72] duration metric: took 16.972266985s to wait for apiserver process to appear ...
	I0929 11:31:34.413722  595895 api_server.go:88] waiting for apiserver healthz status ...
	I0929 11:31:34.413750  595895 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0929 11:31:34.413675  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:34.414213  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:34.414230  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:34.414254  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:34.414261  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:34.414511  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:34.414529  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:34.414543  595895 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-214441"
	I0929 11:31:34.415286  595895 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 11:31:34.416180  595895 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 11:31:34.417833  595895 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 11:31:34.418933  595895 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 11:31:34.419343  595895 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 11:31:34.419365  595895 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 11:31:34.428017  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:34.435805  595895 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
	I0929 11:31:34.443092  595895 api_server.go:141] control plane version: v1.34.0
	I0929 11:31:34.443139  595895 api_server.go:131] duration metric: took 29.409177ms to wait for apiserver health ...
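	For reference, the /healthz probe above can be reproduced by hand once the profile's kubeconfig exists, e.g.:
		kubectl --context addons-214441 get --raw /healthz
	which should return the same "ok" body that the api_server.go lines record. The command is an illustrative equivalent, not one this run executed.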
	I0929 11:31:34.443150  595895 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 11:31:34.495447  595895 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 11:31:34.495473  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:34.527406  595895 system_pods.go:59] 20 kube-system pods found
	I0929 11:31:34.527452  595895 system_pods.go:61] "amd-gpu-device-plugin-7jx7f" [97d8a167-36be-44b5-b99c-2b55db99df3e] Running
	I0929 11:31:34.527458  595895 system_pods.go:61] "coredns-66bc5c9577-fkh52" [bd9cc142-39db-4ed9-9ecc-3d270499136c] Running
	I0929 11:31:34.527463  595895 system_pods.go:61] "coredns-66bc5c9577-sgnb2" [343874b8-6c4d-4e36-8ebf-a379e3a93a98] Running
	I0929 11:31:34.527471  595895 system_pods.go:61] "csi-hostpath-attacher-0" [27d9af25-8d41-4c37-9359-c7bd4f88f09f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 11:31:34.527475  595895 system_pods.go:61] "csi-hostpath-resizer-0" [4f24e046-d5b2-41ff-a051-b0a572bf9348] Pending
	I0929 11:31:34.527484  595895 system_pods.go:61] "csi-hostpathplugin-8279f" [3cc6db25-521c-4711-9d36-e22ab6d16249] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 11:31:34.527490  595895 system_pods.go:61] "etcd-addons-214441" [1710b264-ccec-4054-8a1c-7d8e5ce49163] Running
	I0929 11:31:34.527494  595895 system_pods.go:61] "kube-apiserver-addons-214441" [302fdd61-6c51-4e6e-a1af-7856ccb9f2ca] Running
	I0929 11:31:34.527502  595895 system_pods.go:61] "kube-controller-manager-addons-214441" [3e9c3a44-d14e-4335-a141-f7c648e6159d] Running
	I0929 11:31:34.527507  595895 system_pods.go:61] "kube-ingress-dns-minikube" [12920e9d-debd-4783-9f79-1c3a345e576e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 11:31:34.527513  595895 system_pods.go:61] "kube-proxy-d9fnb" [229af565-3300-4cb4-8289-f3d9b4a9af81] Running
	I0929 11:31:34.527520  595895 system_pods.go:61] "kube-scheduler-addons-214441" [bcaec156-9259-4787-9d02-01a2203b43ac] Running
	I0929 11:31:34.527524  595895 system_pods.go:61] "metrics-server-85b7d694d7-zlrv7" [003bc9b9-be00-4e08-8af7-3443d0502181] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 11:31:34.527533  595895 system_pods.go:61] "nvidia-device-plugin-daemonset-x7b8m" [b96ad59d-d30d-4437-a5c7-ce0c5fc69348] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 11:31:34.527541  595895 system_pods.go:61] "registry-66898fdd98-d7zx7" [e0282924-ef3d-48eb-8906-ea10f183b39e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 11:31:34.527547  595895 system_pods.go:61] "registry-creds-764b6fb674-td8pw" [c6001777-2f97-49d8-972b-27d373557795] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 11:31:34.527557  595895 system_pods.go:61] "registry-proxy-grb7m" [d79830e7-a640-46a7-9b07-38e39aceac96] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 11:31:34.527562  595895 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pw4g9" [74b07f42-da97-4ad5-8fdb-748ab5cfbde2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:31:34.527571  595895 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wvh2l" [987a4a45-4d77-43ac-816c-ebc08faf51bc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:31:34.527575  595895 system_pods.go:61] "storage-provisioner" [e1a79041-edc7-4f8e-96cb-8b3566893a0e] Running
	I0929 11:31:34.527582  595895 system_pods.go:74] duration metric: took 84.42539ms to wait for pod list to return data ...
	I0929 11:31:34.527594  595895 default_sa.go:34] waiting for default service account to be created ...
	I0929 11:31:34.549252  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:34.556947  595895 default_sa.go:45] found service account: "default"
	I0929 11:31:34.556977  595895 default_sa.go:55] duration metric: took 29.376735ms for default service account to be created ...
	I0929 11:31:34.556988  595895 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 11:31:34.596290  595895 system_pods.go:86] 20 kube-system pods found
	I0929 11:31:34.596322  595895 system_pods.go:89] "amd-gpu-device-plugin-7jx7f" [97d8a167-36be-44b5-b99c-2b55db99df3e] Running
	I0929 11:31:34.596330  595895 system_pods.go:89] "coredns-66bc5c9577-fkh52" [bd9cc142-39db-4ed9-9ecc-3d270499136c] Running
	I0929 11:31:34.596334  595895 system_pods.go:89] "coredns-66bc5c9577-sgnb2" [343874b8-6c4d-4e36-8ebf-a379e3a93a98] Running
	I0929 11:31:34.596343  595895 system_pods.go:89] "csi-hostpath-attacher-0" [27d9af25-8d41-4c37-9359-c7bd4f88f09f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 11:31:34.596349  595895 system_pods.go:89] "csi-hostpath-resizer-0" [4f24e046-d5b2-41ff-a051-b0a572bf9348] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 11:31:34.596357  595895 system_pods.go:89] "csi-hostpathplugin-8279f" [3cc6db25-521c-4711-9d36-e22ab6d16249] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 11:31:34.596361  595895 system_pods.go:89] "etcd-addons-214441" [1710b264-ccec-4054-8a1c-7d8e5ce49163] Running
	I0929 11:31:34.596365  595895 system_pods.go:89] "kube-apiserver-addons-214441" [302fdd61-6c51-4e6e-a1af-7856ccb9f2ca] Running
	I0929 11:31:34.596369  595895 system_pods.go:89] "kube-controller-manager-addons-214441" [3e9c3a44-d14e-4335-a141-f7c648e6159d] Running
	I0929 11:31:34.596375  595895 system_pods.go:89] "kube-ingress-dns-minikube" [12920e9d-debd-4783-9f79-1c3a345e576e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 11:31:34.596381  595895 system_pods.go:89] "kube-proxy-d9fnb" [229af565-3300-4cb4-8289-f3d9b4a9af81] Running
	I0929 11:31:34.596385  595895 system_pods.go:89] "kube-scheduler-addons-214441" [bcaec156-9259-4787-9d02-01a2203b43ac] Running
	I0929 11:31:34.596390  595895 system_pods.go:89] "metrics-server-85b7d694d7-zlrv7" [003bc9b9-be00-4e08-8af7-3443d0502181] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 11:31:34.596398  595895 system_pods.go:89] "nvidia-device-plugin-daemonset-x7b8m" [b96ad59d-d30d-4437-a5c7-ce0c5fc69348] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 11:31:34.596404  595895 system_pods.go:89] "registry-66898fdd98-d7zx7" [e0282924-ef3d-48eb-8906-ea10f183b39e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 11:31:34.596409  595895 system_pods.go:89] "registry-creds-764b6fb674-td8pw" [c6001777-2f97-49d8-972b-27d373557795] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 11:31:34.596413  595895 system_pods.go:89] "registry-proxy-grb7m" [d79830e7-a640-46a7-9b07-38e39aceac96] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 11:31:34.596421  595895 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pw4g9" [74b07f42-da97-4ad5-8fdb-748ab5cfbde2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:31:34.596427  595895 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wvh2l" [987a4a45-4d77-43ac-816c-ebc08faf51bc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:31:34.596430  595895 system_pods.go:89] "storage-provisioner" [e1a79041-edc7-4f8e-96cb-8b3566893a0e] Running
	I0929 11:31:34.596439  595895 system_pods.go:126] duration metric: took 39.444621ms to wait for k8s-apps to be running ...
	I0929 11:31:34.596450  595895 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 11:31:34.596507  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:31:34.638029  595895 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 11:31:34.638063  595895 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 11:31:34.896745  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:35.000193  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:35.038316  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:35.057490  595895 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 11:31:35.057521  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 11:31:35.300242  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 11:31:35.379546  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:35.428677  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:35.535091  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:35.881406  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:35.938231  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:36.039311  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:36.382155  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:36.425663  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:36.535684  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:36.886954  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:36.927490  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:37.044975  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:37.382165  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:37.431026  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:37.547302  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:37.920673  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:37.944368  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:38.063651  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:38.330176  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.336121933s)
	W0929 11:31:38.330254  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:38.330284  595895 retry.go:31] will retry after 312.007159ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
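	The apply failure above recurs for the rest of this log: client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because its top-level apiVersion and kind fields are not set, so every attempt exits 1 even though the other gadget addon objects (namespace, serviceaccount, configmap, RBAC, daemonset) apply unchanged. The contents of ig-crd.yaml are not captured here; purely as an illustration of what that validation expects, a manifest needs to open with both required fields, e.g. for a CRD:
		apiVersion: apiextensions.k8s.io/v1
		kind: CustomResourceDefinition
		metadata:
		  name: <plural>.<group>   # placeholder, not taken from this run
	The error text offers --validate=false as an escape hatch, but minikube instead retries the same apply with backoff (312ms, 298ms, 649ms, 983ms, 2.1s, 2.5s, 5.6s, ...), as the retry.go lines that follow show.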
	I0929 11:31:38.330290  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.248696545s)
	I0929 11:31:38.330341  595895 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.73381029s)
	I0929 11:31:38.330367  595895 system_svc.go:56] duration metric: took 3.733914032s WaitForService to wait for kubelet
	I0929 11:31:38.330377  595895 kubeadm.go:578] duration metric: took 20.888935766s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:31:38.330403  595895 node_conditions.go:102] verifying NodePressure condition ...
	I0929 11:31:38.330343  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:38.330449  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:38.330448  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (3.030164486s)
	I0929 11:31:38.330495  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:38.330509  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:38.330817  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:38.330832  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:38.330841  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:38.330848  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:38.330851  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:38.330882  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:38.330895  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:38.330903  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:38.330910  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:38.331221  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:38.331223  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:38.331238  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:38.331251  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:38.331258  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:38.332465  595895 addons.go:479] Verifying addon gcp-auth=true in "addons-214441"
	I0929 11:31:38.334695  595895 out.go:179] * Verifying gcp-auth addon...
	I0929 11:31:38.336858  595895 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 11:31:38.341614  595895 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0929 11:31:38.341645  595895 node_conditions.go:123] node cpu capacity is 2
	I0929 11:31:38.341662  595895 node_conditions.go:105] duration metric: took 11.25287ms to run NodePressure ...
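	The NodePressure step above reads the node's advertised capacity (17734596Ki of ephemeral storage, 2 CPUs). As an illustrative equivalent, not something this run executed, the same figures can be read straight from the node object:
		kubectl --context addons-214441 get node addons-214441 -o jsonpath='{.status.capacity}'
	(assuming addons-214441 is the single node of this profile, which the log above implies).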
	I0929 11:31:38.341688  595895 start.go:241] waiting for startup goroutines ...
	I0929 11:31:38.343873  595895 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 11:31:38.343896  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:38.381193  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:38.423947  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:38.537472  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:38.642514  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:38.843272  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:38.944959  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:38.945123  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:39.033029  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:39.342350  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:39.380435  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:39.424230  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:39.537307  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:39.645310  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.002737784s)
	W0929 11:31:39.645357  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:39.645385  595895 retry.go:31] will retry after 298.904966ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:39.841477  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:39.879072  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:39.922915  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:39.945025  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:40.034681  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:40.343272  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:40.382403  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:40.422942  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:40.539442  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:40.844610  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:40.879893  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:40.924951  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:41.033826  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:41.124246  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.179166796s)
	W0929 11:31:41.124315  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:41.124339  595895 retry.go:31] will retry after 649.538473ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:41.343005  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:41.380641  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:41.425734  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:41.533709  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:41.774560  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:41.841236  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:41.878527  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:41.924650  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:42.035789  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:42.342468  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:42.380731  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:42.426156  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:42.534471  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:42.785912  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.011289133s)
	W0929 11:31:42.785977  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:42.786005  595895 retry.go:31] will retry after 983.289132ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:42.842132  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:42.879170  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:42.924415  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:43.036251  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:43.343664  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:43.382521  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:43.423598  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:43.534301  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:43.770317  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:43.843700  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:43.880339  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:43.925260  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:44.035702  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:44.342152  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:44.380186  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:44.427570  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:44.537930  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:44.812756  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.042397237s)
	W0929 11:31:44.812812  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:44.812836  595895 retry.go:31] will retry after 2.137947671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:44.843045  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:44.881899  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:44.924762  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:45.035718  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:45.343550  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:45.378897  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:45.424866  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:45.534338  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:45.841433  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:45.877671  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:45.923645  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:46.034379  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:46.372337  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:46.406356  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:46.426866  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:46.534032  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:46.842343  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:46.879578  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:46.925175  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:46.951146  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:47.034343  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:47.344240  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:47.382773  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:47.424668  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:47.540037  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:47.843427  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:47.879391  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:47.924262  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:47.960092  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.008893629s)
	W0929 11:31:47.960177  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:47.960206  595895 retry.go:31] will retry after 2.504757299s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:48.033591  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:48.341481  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:48.378697  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:48.424514  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:48.536592  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:48.842185  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:48.879742  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:48.923614  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:49.034098  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:49.340781  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:49.379506  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:49.423231  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:49.534207  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:49.842436  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:49.877896  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:49.924231  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:50.034614  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:50.341556  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:50.379007  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:50.423685  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:50.465827  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:50.536792  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:50.843824  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:50.879454  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:50.924711  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:51.035609  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:51.343958  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:51.379841  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:51.424239  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:51.468054  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.002171892s)
	W0929 11:31:51.468114  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:51.468140  595895 retry.go:31] will retry after 5.613548218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:51.533585  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:51.963029  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:51.963886  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:51.964026  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:52.060713  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:52.343223  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:52.378836  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:52.424767  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:52.534427  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:52.849585  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:52.879670  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:52.948684  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:53.048366  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:53.346453  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:53.380741  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:53.426760  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:53.533978  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:53.840987  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:53.879766  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:53.924223  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:54.035753  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:54.342742  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:54.378763  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:54.423439  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:54.535260  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:54.841859  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:54.880183  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:54.925299  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:55.033854  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:55.340853  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:55.378822  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:55.424172  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:55.534313  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:55.842189  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:55.879647  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:55.925521  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:56.034145  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:56.341524  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:56.384803  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:56.424070  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:56.533658  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:56.845007  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:56.881917  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:56.944166  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:57.044730  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:57.082647  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:57.345840  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:57.379131  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:57.425387  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:57.534328  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:57.843711  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:57.879327  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:57.925624  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:58.038058  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:58.345139  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:58.379479  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:58.427479  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:58.431242  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.348544969s)
	W0929 11:31:58.431293  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:58.431314  595895 retry.go:31] will retry after 5.599503168s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
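The retry.go:31 "will retry after ..." lines above show the addon applier re-running the same kubectl apply with a growing delay after each validation failure. The Go code below is a minimal, hypothetical sketch of that retry-with-backoff pattern; the command, attempt count, and delay values are assumptions, and it is not minikube's actual retry.go implementation.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs a command until it succeeds, sleeping a little
    // longer after each failure, roughly like the "will retry after ..." lines
    // above. The command, attempt count, and backoff values are illustrative.
    func applyWithRetry(attempts int, args ...string) error {
        delay := 5 * time.Second
        var lastErr error
        for i := 0; i < attempts; i++ {
            out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
            fmt.Printf("attempt %d failed, will retry after %s\n", i+1, delay)
            time.Sleep(delay)
            delay += delay / 2 // grow the wait between attempts
        }
        return lastErr
    }

    func main() {
        // Hypothetical invocation mirroring the failing apply in the log.
        if err := applyWithRetry(4, "kubectl", "apply", "--force", "-f", "ig-crd.yaml"); err != nil {
            fmt.Println(err)
        }
    }

In the log itself, the underlying failure never changes between attempts: kubectl's client-side validation rejects ig-crd.yaml because a document in it is missing the top-level apiVersion and kind fields, so each retry reproduces the same stderr.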
	I0929 11:31:58.535825  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:58.841717  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:58.878293  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:58.926559  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:59.035878  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:59.341486  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:59.381532  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:59.425077  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:59.532752  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:59.841172  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:59.878180  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:59.923096  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:00.034481  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:00.557941  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:00.559858  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:00.559963  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:00.560670  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:00.841990  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:00.879357  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:00.926097  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:01.036394  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:01.344642  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:01.379875  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:01.425784  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:01.534466  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:01.842499  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:01.878243  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:01.924047  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:02.033958  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:02.342377  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:02.380154  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:02.423813  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:02.535090  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:02.843862  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:02.879556  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:02.924521  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:03.034759  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:03.340099  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:03.378625  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:03.423534  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:03.534511  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:03.841201  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:03.878471  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:03.924393  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:04.031608  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:32:04.037031  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:04.344499  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:04.378709  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:04.426297  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:04.536239  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:04.842255  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:04.878783  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:04.925876  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:05.037628  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:05.250099  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.218439403s)
	W0929 11:32:05.250163  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:05.250186  595895 retry.go:31] will retry after 6.3969875s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:05.342875  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:05.380683  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:05.424490  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:05.534483  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:05.841804  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:05.880284  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:05.923385  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:06.034868  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:06.341952  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:06.378384  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:06.426408  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:06.535793  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:06.842154  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:06.880699  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:06.924358  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:07.035474  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:07.343686  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:07.378323  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:07.423762  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:07.535390  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:07.843851  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:07.881716  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:07.927684  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:08.037583  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:08.341340  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:08.380517  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:08.424488  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:08.535292  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:08.841002  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:08.879020  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:08.924253  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:09.089297  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:09.340800  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:09.377819  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:09.423823  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:09.534297  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:09.849243  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:09.950172  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:09.950267  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:10.036059  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:10.346922  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:10.379976  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:10.424634  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:10.538864  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:10.842015  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:10.879192  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:10.925328  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:11.040957  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:11.349029  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:11.380885  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:11.452716  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:11.533526  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:11.648223  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:32:11.846882  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:11.881994  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:11.924898  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:12.037323  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:12.342006  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:12.378476  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:12.425404  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:12.544040  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:12.792386  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.144111976s)
	W0929 11:32:12.792447  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:12.792475  595895 retry.go:31] will retry after 13.411476283s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:12.842021  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:12.880179  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:12.924788  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:13.040328  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:13.342434  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:13.378229  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:13.423792  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:13.533728  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:13.843276  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:13.881114  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:13.924958  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:14.034759  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:14.342679  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:14.391569  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:14.496903  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:14.537421  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:14.843175  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:14.880166  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:14.923743  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:15.033994  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:15.343313  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:15.378881  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:15.423448  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:15.538003  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:15.845026  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:15.879663  595895 kapi.go:107] duration metric: took 42.005359357s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 11:32:15.924537  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:16.034645  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:16.341847  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:16.423671  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:16.542699  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:16.844239  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:16.931285  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:17.038278  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:17.353396  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:17.429078  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:17.543634  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:17.844298  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:17.946425  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:18.041877  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:18.345833  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:18.428431  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:18.540908  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:18.840650  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:18.941953  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:19.044517  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:19.341978  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:19.424948  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:19.534807  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:19.839721  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:19.923994  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:20.033049  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:20.342737  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:20.425291  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:20.540624  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:20.844143  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:20.923381  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:21.034820  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:21.343509  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:21.423753  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:21.533929  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:21.841334  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:21.923232  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:22.035002  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:22.630689  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:22.632895  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:22.632941  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:22.845479  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:22.926876  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:23.038229  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:23.355255  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:23.427225  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:23.538625  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:23.844878  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:23.934777  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:24.035280  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:24.346419  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:24.423729  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:24.534589  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:24.842134  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:24.923902  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:25.034892  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:25.362314  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:25.488458  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:25.587385  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:25.861373  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:25.929934  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:26.034355  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:26.204639  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:32:26.361386  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:26.429512  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:26.537022  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:26.843446  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:26.926054  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:27.035634  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:27.344336  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:27.424901  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:27.537642  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:27.644135  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.439429306s)
	W0929 11:32:27.644198  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:27.644227  595895 retry.go:31] will retry after 29.327619656s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:27.842768  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:27.923415  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:28.034767  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:28.343738  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:28.445503  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:28.546159  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:28.851845  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:28.927009  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:29.033400  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:29.341998  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:29.426197  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:29.537012  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:29.842012  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:29.924188  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:30.034037  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:30.346865  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:30.430853  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:30.542769  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:30.842367  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:30.922904  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:31.033768  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:31.341881  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:31.425338  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:31.535963  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:31.844006  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:31.924398  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:32.034705  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:32.346065  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:32.423672  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:32.534377  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:32.842447  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:32.925931  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:33.034800  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:33.387960  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:33.429171  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:33.546901  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:33.852519  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:33.953288  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:34.035154  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:34.344025  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:34.431259  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:34.536600  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:34.843653  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:34.927609  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:35.036794  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:35.341408  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:35.425312  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:35.541227  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:35.847181  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:35.947699  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:36.035760  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:36.344915  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:36.424144  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:36.535593  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:36.841859  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:36.924975  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:37.037919  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:37.452583  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:37.459370  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:37.537236  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:37.841013  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:37.923280  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:38.036969  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:38.340515  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:38.425769  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:38.549235  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:38.842439  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:38.925062  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:39.035751  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:39.341398  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:39.422778  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:39.534951  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:39.841870  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:39.925988  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:40.034408  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:40.340654  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:40.424350  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:40.535075  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:40.843236  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:40.924921  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:41.034406  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:41.497913  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:41.499293  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:41.535243  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:41.844020  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:41.923065  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:42.045660  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:42.342026  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:42.426493  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:42.535570  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:42.841485  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:42.923010  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:43.039027  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:43.346733  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:43.432195  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:43.540145  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:43.885089  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:43.972714  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:44.068027  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:44.345507  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:44.427061  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:44.535862  595895 kapi.go:107] duration metric: took 1m14.00612311s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 11:32:44.842493  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:44.929592  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:45.347246  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:45.424028  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:45.841905  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:45.923701  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:46.347078  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:46.425229  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:46.845817  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:46.925006  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:47.341259  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:47.426132  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:47.845143  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:47.924205  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:48.349502  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:48.452604  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:48.846442  595895 kapi.go:107] duration metric: took 1m10.509578031s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 11:32:48.847867  595895 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-214441 cluster.
	I0929 11:32:48.849227  595895 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 11:32:48.850374  595895 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
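The three out.go messages above describe how the gcp-auth addon behaves: credentials are mounted into every new pod unless the pod carries a gcp-auth-skip-secret label. As a rough sketch only, the Go snippet below builds a pod object carrying that label; the pod name, image, and the label value "true" are assumptions (the message names only the key), and this is not code from minikube or the addon.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Hypothetical pod that opts out of the gcp-auth credential mount by
        // carrying the gcp-auth-skip-secret label mentioned above. The label
        // value, pod name, and image are assumptions.
        pod := corev1.Pod{
            TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{
                Name:   "no-gcp-auth-demo",
                Labels: map[string]string{"gcp-auth-skip-secret": "true"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{
                    {Name: "app", Image: "busybox"},
                },
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }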
	I0929 11:32:48.946549  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:49.426824  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:49.927802  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:50.426120  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:50.925871  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:51.426655  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:51.927170  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:52.426213  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:52.923791  595895 kapi.go:107] duration metric: took 1m18.504852087s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0929 11:32:56.972597  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 11:32:57.723998  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:57.724041  595895 retry.go:31] will retry after 18.741816746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:33:16.468501  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 11:33:17.218683  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:33:17.218783  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:33:17.218797  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:33:17.219140  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:33:17.219161  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:33:17.219172  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:33:17.219180  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:33:17.219203  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:33:17.219480  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:33:17.219502  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:33:17.219534  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	W0929 11:33:17.219634  595895 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0929 11:33:17.221637  595895 out.go:179] * Enabled addons: ingress-dns, storage-provisioner-rancher, storage-provisioner, cloud-spanner, volcano, amd-gpu-device-plugin, metrics-server, registry-creds, nvidia-device-plugin, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0929 11:33:17.223007  595895 addons.go:514] duration metric: took 1m59.781528816s for enable addons: enabled=[ingress-dns storage-provisioner-rancher storage-provisioner cloud-spanner volcano amd-gpu-device-plugin metrics-server registry-creds nvidia-device-plugin yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0929 11:33:17.223046  595895 start.go:246] waiting for cluster config update ...
	I0929 11:33:17.223066  595895 start.go:255] writing updated cluster config ...
	I0929 11:33:17.223379  595895 ssh_runner.go:195] Run: rm -f paused
	I0929 11:33:17.229885  595895 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:33:17.234611  595895 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fkh52" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.240669  595895 pod_ready.go:94] pod "coredns-66bc5c9577-fkh52" is "Ready"
	I0929 11:33:17.240694  595895 pod_ready.go:86] duration metric: took 6.057488ms for pod "coredns-66bc5c9577-fkh52" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.243134  595895 pod_ready.go:83] waiting for pod "etcd-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.248977  595895 pod_ready.go:94] pod "etcd-addons-214441" is "Ready"
	I0929 11:33:17.249003  595895 pod_ready.go:86] duration metric: took 5.848678ms for pod "etcd-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.251694  595895 pod_ready.go:83] waiting for pod "kube-apiserver-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.257270  595895 pod_ready.go:94] pod "kube-apiserver-addons-214441" is "Ready"
	I0929 11:33:17.257299  595895 pod_ready.go:86] duration metric: took 5.583626ms for pod "kube-apiserver-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.259585  595895 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.635253  595895 pod_ready.go:94] pod "kube-controller-manager-addons-214441" is "Ready"
	I0929 11:33:17.635287  595895 pod_ready.go:86] duration metric: took 375.675116ms for pod "kube-controller-manager-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.834921  595895 pod_ready.go:83] waiting for pod "kube-proxy-d9fnb" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:18.234706  595895 pod_ready.go:94] pod "kube-proxy-d9fnb" is "Ready"
	I0929 11:33:18.234735  595895 pod_ready.go:86] duration metric: took 399.786159ms for pod "kube-proxy-d9fnb" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:18.435590  595895 pod_ready.go:83] waiting for pod "kube-scheduler-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:18.834304  595895 pod_ready.go:94] pod "kube-scheduler-addons-214441" is "Ready"
	I0929 11:33:18.834340  595895 pod_ready.go:86] duration metric: took 398.719914ms for pod "kube-scheduler-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:18.834353  595895 pod_ready.go:40] duration metric: took 1.60442513s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:33:18.881427  595895 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 11:33:18.883901  595895 out.go:179] * Done! kubectl is now configured to use "addons-214441" cluster and "default" namespace by default
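
The gcp-auth messages above say that credential mounting can be skipped per pod by adding a label with the `gcp-auth-skip-secret` key. A minimal sketch of launching such a pod against this cluster, assuming the conventional value "true" for the label (the pod name and image below are placeholders, not taken from this run):

  # Create a throwaway pod that the gcp-auth webhook should leave untouched.
  kubectl --context addons-214441 run no-gcp-creds --image=busybox \
    --labels=gcp-auth-skip-secret=true -- sleep 3600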
	
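The repeated inspektor-gadget failures above come from kubectl's validation of /etc/kubernetes/addons/ig-crd.yaml, which reports `apiVersion not set, kind not set`, i.e. at least one YAML document in that file is missing its header fields. One way to look at the file as shipped on the node (the profile name is taken from this run; the header shown in the comment is the usual form for CustomResourceDefinitions, not a quote from the file):

  # Each document in ig-crd.yaml should begin with an apiVersion/kind pair,
  # e.g. "apiVersion: apiextensions.k8s.io/v1" and "kind: CustomResourceDefinition".
  minikube -p addons-214441 ssh -- sudo head -n 10 /etc/kubernetes/addons/ig-crd.yaml
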
	
	==> Docker <==
	Sep 29 11:42:23 addons-214441 dockerd[1525]: time="2025-09-29T11:42:23.906528908Z" level=info msg="ignoring event" container=6c19c08a0c4b016f5ddf2b637ff411e873f5b82bd9522d934341ed0df582d7d9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:42:29 addons-214441 cri-dockerd[1389]: time="2025-09-29T11:42:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/be46dd12a568554d1b475c6c260164613702e2f5fa7bda6b80cac94904a8502c/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 29 11:42:29 addons-214441 dockerd[1525]: time="2025-09-29T11:42:29.801095357Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 11:42:29 addons-214441 dockerd[1525]: time="2025-09-29T11:42:29.843576981Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:42:43 addons-214441 dockerd[1525]: time="2025-09-29T11:42:43.075566421Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 11:42:43 addons-214441 dockerd[1525]: time="2025-09-29T11:42:43.112765795Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:42:45 addons-214441 dockerd[1525]: time="2025-09-29T11:42:45.154840245Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:43:03 addons-214441 dockerd[1525]: time="2025-09-29T11:43:03.153401267Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:43:09 addons-214441 dockerd[1525]: time="2025-09-29T11:43:09.074862693Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 11:43:09 addons-214441 dockerd[1525]: time="2025-09-29T11:43:09.132638196Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:43:57 addons-214441 dockerd[1525]: time="2025-09-29T11:43:57.074866581Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 11:43:57 addons-214441 dockerd[1525]: time="2025-09-29T11:43:57.185791327Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:43:57 addons-214441 cri-dockerd[1389]: time="2025-09-29T11:43:57Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Pulling from library/busybox"
	Sep 29 11:44:29 addons-214441 dockerd[1525]: time="2025-09-29T11:44:29.855693553Z" level=info msg="ignoring event" container=be46dd12a568554d1b475c6c260164613702e2f5fa7bda6b80cac94904a8502c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:45:00 addons-214441 cri-dockerd[1389]: time="2025-09-29T11:45:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ec4ac1c4a59a99b911940e7471fd4d62bd648ddf20b864c871d76c778232c25f/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 29 11:45:00 addons-214441 dockerd[1525]: time="2025-09-29T11:45:00.392898188Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 11:45:00 addons-214441 dockerd[1525]: time="2025-09-29T11:45:00.436740281Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:45:12 addons-214441 dockerd[1525]: time="2025-09-29T11:45:12.090833631Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 11:45:12 addons-214441 dockerd[1525]: time="2025-09-29T11:45:12.136853848Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:45:21 addons-214441 dockerd[1525]: time="2025-09-29T11:45:21.055513809Z" level=info msg="ignoring event" container=ec4ac1c4a59a99b911940e7471fd4d62bd648ddf20b864c871d76c778232c25f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:45:34 addons-214441 dockerd[1525]: time="2025-09-29T11:45:34.176156809Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:45:46 addons-214441 dockerd[1525]: time="2025-09-29T11:45:46.027312687Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=31302c4317135dfadbabf2bf5a114745aba05b03367b93c097aab3ee4dda5549
	Sep 29 11:45:46 addons-214441 dockerd[1525]: time="2025-09-29T11:45:46.072925083Z" level=info msg="ignoring event" container=31302c4317135dfadbabf2bf5a114745aba05b03367b93c097aab3ee4dda5549 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:45:46 addons-214441 dockerd[1525]: time="2025-09-29T11:45:46.221164820Z" level=info msg="ignoring event" container=621898582dfa1d0008fac20d7d4c0701ae058713638593c938b29f4e124362a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:45:48 addons-214441 dockerd[1525]: time="2025-09-29T11:45:48.168703145Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
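
Every image pull in the Docker log above is rejected with `toomanyrequests`, Docker Hub's anonymous pull rate limit. A sketch for checking the remaining quota from the same host, based on Docker Hub's rate-limit preview endpoint (assumes curl and jq are available):

  # Fetch an anonymous token for the preview repository, then read the
  # ratelimit-limit / ratelimit-remaining headers from a manifest request.
  TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
  curl -sI -H "Authorization: Bearer $TOKEN" \
    https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit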
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	8f0982c238973       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   66bafac6b9afb       busybox
	af544573fc0a7       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          13 minutes ago      Running             csi-snapshotter                          0                   02a7d350b8353       csi-hostpathplugin-8279f
	0ce41bd4faa5b       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          13 minutes ago      Running             csi-provisioner                          0                   02a7d350b8353       csi-hostpathplugin-8279f
	a8b5f59d15a16       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            13 minutes ago      Running             liveness-probe                           0                   02a7d350b8353       csi-hostpathplugin-8279f
	2514173d96a26       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           13 minutes ago      Running             hostpath                                 0                   02a7d350b8353       csi-hostpathplugin-8279f
	9b5cb54a94a47       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             13 minutes ago      Running             controller                               0                   8b83af6a32772       ingress-nginx-controller-9cc49f96f-h99dj
	ef4f6e22ce31a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                13 minutes ago      Running             node-driver-registrar                    0                   02a7d350b8353       csi-hostpathplugin-8279f
	5810f70edf860       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   13 minutes ago      Running             csi-external-health-monitor-controller   0                   02a7d350b8353       csi-hostpathplugin-8279f
	51f0c139f4f77       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              13 minutes ago      Running             csi-resizer                              0                   9e3b6780764f8       csi-hostpath-resizer-0
	e02a58717cc7c       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             13 minutes ago      Running             csi-attacher                             0                   00ac4103d1658       csi-hostpath-attacher-0
	e805d753e363a       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      13 minutes ago      Running             volume-snapshot-controller               0                   5ef4f58a4b6da       snapshot-controller-7d9fbc56b8-pw4g9
	868179ee6252a       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      13 minutes ago      Running             volume-snapshot-controller               0                   34844f808604d       snapshot-controller-7d9fbc56b8-wvh2l
	30d73d85a386c       8c217da6734db                                                                                                                                13 minutes ago      Exited              patch                                    1                   63ec050554699       ingress-nginx-admission-patch-tp6tp
	4182ff3d1e473       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   13 minutes ago      Exited              create                                   0                   f519da4bfec27       ingress-nginx-admission-create-s6nvq
	220ba84adaccb       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            13 minutes ago      Running             gadget                                   0                   95e2903b29637       gadget-xvvvf
	48adb1b2452be       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                                         14 minutes ago      Running             minikube-ingress-dns                     0                   3ce8cc04a57f5       kube-ingress-dns-minikube
	388ea771a1c89       6e38f40d628db                                                                                                                                14 minutes ago      Running             storage-provisioner                      0                   a451536f2a3ae       storage-provisioner
	ef7f4d809a410       rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                               14 minutes ago      Running             amd-gpu-device-plugin                    0                   efbec0257280a       amd-gpu-device-plugin-7jx7f
	5629c377b6053       52546a367cc9e                                                                                                                                14 minutes ago      Running             coredns                                  0                   b6c342cfbd0e9       coredns-66bc5c9577-fkh52
	cf32cea215063       df0860106674d                                                                                                                                14 minutes ago      Running             kube-proxy                               0                   164bb1f35fdbf       kube-proxy-d9fnb
	1b712309a5901       46169d968e920                                                                                                                                15 minutes ago      Running             kube-scheduler                           0                   16368e958b541       kube-scheduler-addons-214441
	5df8c088591fb       5f1f5298c888d                                                                                                                                15 minutes ago      Running             etcd                                     0                   0a4ad14786721       etcd-addons-214441
	b5368f01fa760       90550c43ad2bc                                                                                                                                15 minutes ago      Running             kube-apiserver                           0                   47b3b468b3308       kube-apiserver-addons-214441
	b7a56dc83eb1d       a0af72f2ec6d6                                                                                                                                15 minutes ago      Running             kube-controller-manager                  0                   8a7efdf44079d       kube-controller-manager-addons-214441
	
	
	==> controller_ingress [9b5cb54a94a4] <==
	I0929 11:32:45.021197       7 controller.go:240] "Initial sync, sleeping for 1 second"
	I0929 11:32:45.021384       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-h99dj", UID:"2bde8bfa-47f0-48da-9e63-cd8e2a0a38c6", APIVersion:"v1", ResourceVersion:"784", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0929 11:32:45.037639       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-h99dj" node="addons-214441"
	W0929 11:39:51.373839       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0929 11:39:51.377315       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0929 11:39:51.383910       7 store.go:443] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	W0929 11:39:51.384731       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0929 11:39:51.386972       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0929 11:39:51.388223       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"6c60e7a0-fa15-408e-810a-a4af1c88fe08", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2366", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0929 11:39:51.444940       7 controller.go:228] "Backend successfully reloaded"
	I0929 11:39:51.450504       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-h99dj", UID:"2bde8bfa-47f0-48da-9e63-cd8e2a0a38c6", APIVersion:"v1", ResourceVersion:"784", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0929 11:39:54.719235       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I0929 11:39:54.719924       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0929 11:39:54.771503       7 controller.go:228] "Backend successfully reloaded"
	I0929 11:39:54.772049       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-h99dj", UID:"2bde8bfa-47f0-48da-9e63-cd8e2a0a38c6", APIVersion:"v1", ResourceVersion:"784", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0929 11:39:58.057011       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 11:40:01.385065       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 11:40:04.718802       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 11:40:08.052750       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 11:40:11.385651       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I0929 11:40:44.966647       7 status.go:309] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.39.76"}]
	I0929 11:40:44.973434       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"6c60e7a0-fa15-408e-810a-a4af1c88fe08", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2672", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0929 11:40:44.974230       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 11:42:12.884706       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 11:42:23.602348       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
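
The controller log above repeatedly warns that Service "default/nginx" has no active Endpoint, which is consistent with the nginx pod never becoming Ready while its image pulls are rate-limited in the Docker log. Two read-only checks against this cluster (names taken from this report):

  # Does the Service have any ready addresses, and why is the backing pod not Ready?
  kubectl --context addons-214441 -n default get endpoints nginx
  kubectl --context addons-214441 -n default describe pod nginx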
	
	
	==> coredns [5629c377b605] <==
	[INFO] 10.244.0.7:52212 - 14403 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.001145753s
	[INFO] 10.244.0.7:52212 - 34526 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.001027976s
	[INFO] 10.244.0.7:52212 - 40091 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.002958291s
	[INFO] 10.244.0.7:52212 - 8101 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000112715s
	[INFO] 10.244.0.7:52212 - 55833 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000201304s
	[INFO] 10.244.0.7:52212 - 46374 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000813986s
	[INFO] 10.244.0.7:52212 - 13461 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00014644s
	[INFO] 10.244.0.7:58134 - 57276 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000168682s
	[INFO] 10.244.0.7:58134 - 56902 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000087725s
	[INFO] 10.244.0.7:45806 - 23713 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000124662s
	[INFO] 10.244.0.7:45806 - 23950 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000142715s
	[INFO] 10.244.0.7:42777 - 55128 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080735s
	[INFO] 10.244.0.7:42777 - 54892 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000216294s
	[INFO] 10.244.0.7:36398 - 14124 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000321419s
	[INFO] 10.244.0.7:36398 - 13929 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000550817s
	[INFO] 10.244.0.26:41550 - 7840 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00065483s
	[INFO] 10.244.0.26:48585 - 52888 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000202217s
	[INFO] 10.244.0.26:53114 - 55168 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000190191s
	[INFO] 10.244.0.26:47096 - 26187 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000662248s
	[INFO] 10.244.0.26:48999 - 38178 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00015298s
	[INFO] 10.244.0.26:58286 - 39587 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000285241s
	[INFO] 10.244.0.26:45238 - 61249 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003642198s
	[INFO] 10.244.0.26:33573 - 52185 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.003922074s
	[INFO] 10.244.0.30:45249 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.002086838s
	[INFO] 10.244.0.30:35918 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000164605s
	
	
	==> describe nodes <==
	Name:               addons-214441
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-214441
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8c533eb3dce8338d3d4e7231ea97d1c44ed6f81
	                    minikube.k8s.io/name=addons-214441
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T11_31_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-214441
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-214441"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 11:31:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-214441
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:46:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:40:14 +0000   Mon, 29 Sep 2025 11:31:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:40:14 +0000   Mon, 29 Sep 2025 11:31:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:40:14 +0000   Mon, 29 Sep 2025 11:31:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:40:14 +0000   Mon, 29 Sep 2025 11:31:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    addons-214441
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 44179717398847cdb8d861dffe58e059
	  System UUID:                44179717-3988-47cd-b8d8-61dffe58e059
	  Boot ID:                    f083535d-5807-413a-9a6b-1a0bbe2d4432
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  gadget                      gadget-xvvvf                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-h99dj    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         14m
	  kube-system                 amd-gpu-device-plugin-7jx7f                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-66bc5c9577-fkh52                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     14m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 csi-hostpathplugin-8279f                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-addons-214441                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         14m
	  kube-system                 kube-apiserver-addons-214441                250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-214441       200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-d9fnb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-214441                100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 snapshot-controller-7d9fbc56b8-pw4g9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 snapshot-controller-7d9fbc56b8-wvh2l        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node addons-214441 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node addons-214441 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node addons-214441 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node addons-214441 event: Registered Node addons-214441 in Controller
	  Normal  NodeReady                14m   kubelet          Node addons-214441 status is now: NodeReady
	
	
	==> dmesg <==
	[ +13.445646] kauditd_printk_skb: 68 callbacks suppressed
	[  +5.142447] kauditd_printk_skb: 20 callbacks suppressed
	[Sep29 11:32] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.199632] kauditd_printk_skb: 38 callbacks suppressed
	[  +1.030429] kauditd_printk_skb: 78 callbacks suppressed
	[  +5.195773] kauditd_printk_skb: 75 callbacks suppressed
	[  +5.274224] kauditd_printk_skb: 150 callbacks suppressed
	[  +5.780886] kauditd_printk_skb: 68 callbacks suppressed
	[  +8.295767] kauditd_printk_skb: 56 callbacks suppressed
	[Sep29 11:39] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.045350] kauditd_printk_skb: 59 callbacks suppressed
	[ +11.893143] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.745446] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.704785] kauditd_printk_skb: 81 callbacks suppressed
	[Sep29 11:40] kauditd_printk_skb: 79 callbacks suppressed
	[  +2.308317] kauditd_printk_skb: 113 callbacks suppressed
	[  +1.203541] kauditd_printk_skb: 47 callbacks suppressed
	[Sep29 11:42] kauditd_printk_skb: 27 callbacks suppressed
	[  +9.517499] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.729582] kauditd_printk_skb: 22 callbacks suppressed
	[Sep29 11:44] kauditd_printk_skb: 26 callbacks suppressed
	[Sep29 11:45] kauditd_printk_skb: 9 callbacks suppressed
	[ +20.688994] kauditd_printk_skb: 26 callbacks suppressed
	[ +25.065246] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.842485] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [5df8c088591f] <==
	{"level":"warn","ts":"2025-09-29T11:32:00.549775Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.256178ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:00.549795Z","caller":"traceutil/trace.go:172","msg":"trace[872905781] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1062; }","duration":"133.278789ms","start":"2025-09-29T11:32:00.416510Z","end":"2025-09-29T11:32:00.549789Z","steps":["trace[872905781] 'agreement among raft nodes before linearized reading'  (duration: 133.240765ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:32:22.619881Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"283.951682ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:22.619953Z","caller":"traceutil/trace.go:172","msg":"trace[256565612] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1138; }","duration":"284.054314ms","start":"2025-09-29T11:32:22.335884Z","end":"2025-09-29T11:32:22.619939Z","steps":["trace[256565612] 'range keys from in-memory index tree'  (duration: 283.898213ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:32:22.620417Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"203.038923ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:22.620455Z","caller":"traceutil/trace.go:172","msg":"trace[2141218366] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1138; }","duration":"203.079865ms","start":"2025-09-29T11:32:22.417365Z","end":"2025-09-29T11:32:22.620444Z","steps":["trace[2141218366] 'range keys from in-memory index tree'  (duration: 202.851561ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:32:37.446139Z","caller":"traceutil/trace.go:172","msg":"trace[1518739598] linearizableReadLoop","detail":"{readStateIndex:1281; appliedIndex:1281; }","duration":"111.376689ms","start":"2025-09-29T11:32:37.334743Z","end":"2025-09-29T11:32:37.446120Z","steps":["trace[1518739598] 'read index received'  (duration: 111.370356ms)","trace[1518739598] 'applied index is now lower than readState.Index'  (duration: 5.449µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:32:37.446365Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.596508ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:37.446409Z","caller":"traceutil/trace.go:172","msg":"trace[333303529] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1250; }","duration":"111.664223ms","start":"2025-09-29T11:32:37.334737Z","end":"2025-09-29T11:32:37.446401Z","steps":["trace[333303529] 'agreement among raft nodes before linearized reading'  (duration: 111.566754ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:32:37.447956Z","caller":"traceutil/trace.go:172","msg":"trace[1818807407] transaction","detail":"{read_only:false; response_revision:1251; number_of_response:1; }","duration":"216.083326ms","start":"2025-09-29T11:32:37.231864Z","end":"2025-09-29T11:32:37.447947Z","steps":["trace[1818807407] 'process raft request'  (duration: 214.333833ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:32:41.490882Z","caller":"traceutil/trace.go:172","msg":"trace[1943079177] linearizableReadLoop","detail":"{readStateIndex:1295; appliedIndex:1295; }","duration":"156.252408ms","start":"2025-09-29T11:32:41.334599Z","end":"2025-09-29T11:32:41.490852Z","steps":["trace[1943079177] 'read index received'  (duration: 156.245254ms)","trace[1943079177] 'applied index is now lower than readState.Index'  (duration: 4.49µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:32:41.491088Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.469181ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:41.491110Z","caller":"traceutil/trace.go:172","msg":"trace[366978766] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1264; }","duration":"156.509563ms","start":"2025-09-29T11:32:41.334595Z","end":"2025-09-29T11:32:41.491105Z","steps":["trace[366978766] 'agreement among raft nodes before linearized reading'  (duration: 156.436502ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:32:41.491567Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-29T11:32:41.150207Z","time spent":"341.358415ms","remote":"127.0.0.1:41482","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2025-09-29T11:39:57.948345Z","caller":"traceutil/trace.go:172","msg":"trace[1591406496] linearizableReadLoop","detail":"{readStateIndex:2551; appliedIndex:2551; }","duration":"124.72426ms","start":"2025-09-29T11:39:57.823478Z","end":"2025-09-29T11:39:57.948202Z","steps":["trace[1591406496] 'read index received'  (duration: 124.71863ms)","trace[1591406496] 'applied index is now lower than readState.Index'  (duration: 4.802µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:39:57.948549Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.025613ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:39:57.948597Z","caller":"traceutil/trace.go:172","msg":"trace[612703964] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2421; }","duration":"125.116152ms","start":"2025-09-29T11:39:57.823474Z","end":"2025-09-29T11:39:57.948590Z","steps":["trace[612703964] 'agreement among raft nodes before linearized reading'  (duration: 124.997233ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:39:57.949437Z","caller":"traceutil/trace.go:172","msg":"trace[1306847484] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2422; }","duration":"296.693601ms","start":"2025-09-29T11:39:57.652733Z","end":"2025-09-29T11:39:57.949427Z","steps":["trace[1306847484] 'process raft request'  (duration: 296.121623ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:39:58.302377Z","caller":"traceutil/trace.go:172","msg":"trace[126438438] transaction","detail":"{read_only:false; response_revision:2433; number_of_response:1; }","duration":"116.690338ms","start":"2025-09-29T11:39:58.185669Z","end":"2025-09-29T11:39:58.302359Z","steps":["trace[126438438] 'process raft request'  (duration: 107.946386ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:41:07.514630Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1800}
	{"level":"info","ts":"2025-09-29T11:41:07.635361Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1800,"took":"119.419717ms","hash":3783191704,"current-db-size-bytes":8732672,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":5963776,"current-db-size-in-use":"6.0 MB"}
	{"level":"info","ts":"2025-09-29T11:41:07.635428Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3783191704,"revision":1800,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T11:46:07.523170Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2728}
	{"level":"info","ts":"2025-09-29T11:46:07.550978Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2728,"took":"26.544612ms","hash":3628222510,"current-db-size-bytes":8732672,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":4538368,"current-db-size-in-use":"4.5 MB"}
	{"level":"info","ts":"2025-09-29T11:46:07.551024Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3628222510,"revision":2728,"compact-revision":1800}
	
	
	==> kernel <==
	 11:46:09 up 15 min,  0 users,  load average: 0.09, 0.46, 0.56
	Linux addons-214441 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [b5368f01fa76] <==
	W0929 11:39:24.460545       1 cacher.go:182] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0929 11:39:24.467415       1 cacher.go:182] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0929 11:39:24.500846       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0929 11:39:24.516151       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W0929 11:39:24.580645       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0929 11:39:25.117972       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0929 11:39:25.322421       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	E0929 11:39:42.471472       1 conn.go:339] Error on socket receive: read tcp 192.168.39.76:8443->192.168.39.1:44978: use of closed network connection
	E0929 11:39:42.758211       1 conn.go:339] Error on socket receive: read tcp 192.168.39.76:8443->192.168.39.1:45000: use of closed network connection
	I0929 11:39:45.674152       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:39:51.379831       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0929 11:39:51.635969       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.133.174"}
	I0929 11:39:52.039060       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.167.87"}
	I0929 11:40:21.576337       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0929 11:40:21.997121       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:41:04.368312       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:41:09.156786       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 11:41:32.070520       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:42:20.474077       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:42:56.312150       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:43:33.051574       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:44:06.773562       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:44:43.393063       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:45:30.439510       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:45:44.970907       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [b7a56dc83eb1] <==
	E0929 11:45:14.464025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:45:15.196789       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:45:15.198557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:45:16.196460       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0929 11:45:27.186119       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:45:27.187746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:45:30.078949       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:45:30.080558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:45:30.545829       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:45:30.547315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:45:31.197071       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0929 11:45:32.702330       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:45:32.703610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:45:42.962108       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:45:42.964179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:45:46.197036       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0929 11:45:48.918875       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:45:48.920690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:45:55.978554       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:45:55.979712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:46:01.197873       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0929 11:46:06.334819       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:46:06.336337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:46:08.646985       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:46:08.649914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
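	The repeated persistentvolume-binder errors above all share one cause: no StorageClass named "local-path" exists in the cluster, so the binder cannot find a provisioner for default/test-pvc. A quick, hedged check using the context and profile names from this run (the addon name is inferred from the testdata/storage-provisioner-rancher path used by the LocalPath test):

	kubectl --context addons-214441 get storageclass
	out/minikube-linux-amd64 -p addons-214441 addons enable storage-provisioner-rancher

	If "local-path" still does not appear after enabling the addon, the local-path-provisioner deployment (typically in a local-path-storage namespace) would be the next place to look.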
	
	
	==> kube-proxy [cf32cea21506] <==
	I0929 11:31:18.966107       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:31:19.067553       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:31:19.067585       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.76"]
	E0929 11:31:19.067663       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:31:19.367843       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 11:31:19.367925       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 11:31:19.367957       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:31:19.410838       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:31:19.411105       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:31:19.411117       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:31:19.438109       1 config.go:200] "Starting service config controller"
	I0929 11:31:19.438145       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:31:19.438165       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:31:19.438169       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:31:19.438197       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:31:19.438201       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:31:19.443612       1 config.go:309] "Starting node config controller"
	I0929 11:31:19.443644       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:31:19.443650       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 11:31:19.552512       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:31:19.552650       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 11:31:19.639397       1 shared_informer.go:356] "Caches are synced" controller="service config"
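	Nothing in the kube-proxy output points at the failing tests; the only error is the informational warning that nodePortAddresses is unset. If the suggestion in that message were followed, the field lives in the KubeProxyConfiguration, which a kubeadm-style cluster such as this one normally keeps in the kube-proxy ConfigMap; a sketch only, with the exact field placement assumed from the warning text:

	kubectl --context addons-214441 -n kube-system edit configmap kube-proxy
	# in the config.conf key, set:  nodePortAddresses: ["primary"]
	kubectl --context addons-214441 -n kube-system rollout restart daemonset kube-proxy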
	
	
	==> kube-scheduler [1b712309a590] <==
	E0929 11:31:09.221196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 11:31:09.221236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 11:31:09.222033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 11:31:09.225006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 11:31:09.225514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 11:31:09.225802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 11:31:09.225865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 11:31:09.225922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 11:31:09.226012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 11:31:09.226045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 11:31:10.048406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 11:31:10.133629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 11:31:10.190360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 11:31:10.277104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 11:31:10.293798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 11:31:10.302970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 11:31:10.326331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 11:31:10.346485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 11:31:10.373940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 11:31:10.450205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 11:31:10.476705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 11:31:10.548049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 11:31:10.584420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 11:31:10.696768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 11:31:12.791660       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
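	The burst of "is forbidden" errors is confined to the first seconds after the scheduler starts, before its RBAC bindings are visible; the closing "Caches are synced" line shows it recovered, so this is startup noise rather than a cause of the failures. If similar errors persisted, the scheduler's permissions could be spot-checked with:

	kubectl --context addons-214441 auth can-i list pods --as=system:kube-scheduler
	kubectl --context addons-214441 auth can-i watch storageclasses.storage.k8s.io --as=system:kube-scheduler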
	
	
	==> kubelet <==
	Sep 29 11:45:25 addons-214441 kubelet[2504]: I0929 11:45:25.045799    2504 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 11:45:25 addons-214441 kubelet[2504]: E0929 11:45:25.046445    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="aff7bf59-352b-45d6-9449-f442a6b25e27"
	Sep 29 11:45:34 addons-214441 kubelet[2504]: E0929 11:45:34.183027    2504 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 29 11:45:34 addons-214441 kubelet[2504]: E0929 11:45:34.183144    2504 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 29 11:45:34 addons-214441 kubelet[2504]: E0929 11:45:34.183241    2504 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(182f1b86-e027-4d79-a5a9-272a05688c3b): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 11:45:34 addons-214441 kubelet[2504]: E0929 11:45:34.183327    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="182f1b86-e027-4d79-a5a9-272a05688c3b"
	Sep 29 11:45:36 addons-214441 kubelet[2504]: E0929 11:45:36.046958    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="aff7bf59-352b-45d6-9449-f442a6b25e27"
	Sep 29 11:45:46 addons-214441 kubelet[2504]: I0929 11:45:46.328763    2504 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/704604b0-d02a-4f45-9445-f0741ba7333b-config-volume\") pod \"704604b0-d02a-4f45-9445-f0741ba7333b\" (UID: \"704604b0-d02a-4f45-9445-f0741ba7333b\") "
	Sep 29 11:45:46 addons-214441 kubelet[2504]: I0929 11:45:46.328822    2504 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-779wl\" (UniqueName: \"kubernetes.io/projected/704604b0-d02a-4f45-9445-f0741ba7333b-kube-api-access-779wl\") pod \"704604b0-d02a-4f45-9445-f0741ba7333b\" (UID: \"704604b0-d02a-4f45-9445-f0741ba7333b\") "
	Sep 29 11:45:46 addons-214441 kubelet[2504]: I0929 11:45:46.329748    2504 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/704604b0-d02a-4f45-9445-f0741ba7333b-config-volume" (OuterVolumeSpecName: "config-volume") pod "704604b0-d02a-4f45-9445-f0741ba7333b" (UID: "704604b0-d02a-4f45-9445-f0741ba7333b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Sep 29 11:45:46 addons-214441 kubelet[2504]: I0929 11:45:46.333997    2504 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/704604b0-d02a-4f45-9445-f0741ba7333b-kube-api-access-779wl" (OuterVolumeSpecName: "kube-api-access-779wl") pod "704604b0-d02a-4f45-9445-f0741ba7333b" (UID: "704604b0-d02a-4f45-9445-f0741ba7333b"). InnerVolumeSpecName "kube-api-access-779wl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Sep 29 11:45:46 addons-214441 kubelet[2504]: I0929 11:45:46.430336    2504 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/704604b0-d02a-4f45-9445-f0741ba7333b-config-volume\") on node \"addons-214441\" DevicePath \"\""
	Sep 29 11:45:46 addons-214441 kubelet[2504]: I0929 11:45:46.430393    2504 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-779wl\" (UniqueName: \"kubernetes.io/projected/704604b0-d02a-4f45-9445-f0741ba7333b-kube-api-access-779wl\") on node \"addons-214441\" DevicePath \"\""
	Sep 29 11:45:47 addons-214441 kubelet[2504]: I0929 11:45:47.147310    2504 scope.go:117] "RemoveContainer" containerID="31302c4317135dfadbabf2bf5a114745aba05b03367b93c097aab3ee4dda5549"
	Sep 29 11:45:47 addons-214441 kubelet[2504]: I0929 11:45:47.193963    2504 scope.go:117] "RemoveContainer" containerID="31302c4317135dfadbabf2bf5a114745aba05b03367b93c097aab3ee4dda5549"
	Sep 29 11:45:47 addons-214441 kubelet[2504]: E0929 11:45:47.195749    2504 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 31302c4317135dfadbabf2bf5a114745aba05b03367b93c097aab3ee4dda5549" containerID="31302c4317135dfadbabf2bf5a114745aba05b03367b93c097aab3ee4dda5549"
	Sep 29 11:45:47 addons-214441 kubelet[2504]: I0929 11:45:47.195792    2504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"31302c4317135dfadbabf2bf5a114745aba05b03367b93c097aab3ee4dda5549"} err="failed to get container status \"31302c4317135dfadbabf2bf5a114745aba05b03367b93c097aab3ee4dda5549\": rpc error: code = Unknown desc = Error response from daemon: No such container: 31302c4317135dfadbabf2bf5a114745aba05b03367b93c097aab3ee4dda5549"
	Sep 29 11:45:48 addons-214441 kubelet[2504]: I0929 11:45:48.062861    2504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="704604b0-d02a-4f45-9445-f0741ba7333b" path="/var/lib/kubelet/pods/704604b0-d02a-4f45-9445-f0741ba7333b/volumes"
	Sep 29 11:45:48 addons-214441 kubelet[2504]: E0929 11:45:48.174526    2504 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 29 11:45:48 addons-214441 kubelet[2504]: E0929 11:45:48.174585    2504 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 29 11:45:48 addons-214441 kubelet[2504]: E0929 11:45:48.174660    2504 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(aff7bf59-352b-45d6-9449-f442a6b25e27): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 11:45:48 addons-214441 kubelet[2504]: E0929 11:45:48.174689    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="aff7bf59-352b-45d6-9449-f442a6b25e27"
	Sep 29 11:45:49 addons-214441 kubelet[2504]: E0929 11:45:49.050894    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="182f1b86-e027-4d79-a5a9-272a05688c3b"
	Sep 29 11:46:02 addons-214441 kubelet[2504]: E0929 11:46:02.054576    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="aff7bf59-352b-45d6-9449-f442a6b25e27"
	Sep 29 11:46:04 addons-214441 kubelet[2504]: E0929 11:46:04.053442    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="182f1b86-e027-4d79-a5a9-272a05688c3b"
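	Every kubelet error in this window has the same root cause: unauthenticated pulls of docker.io/nginx images hitting the Docker Hub rate limit, which is what keeps nginx and task-pv-pod in ErrImagePull/ImagePullBackOff. A hedged workaround for a minikube profile is to pull the image outside the cluster and side-load it so the kubelet never contacts Docker Hub (image names taken from the events above):

	docker pull nginx:alpine
	out/minikube-linux-amd64 -p addons-214441 image load nginx:alpine

	Alternatively, an authenticated pull secret created with kubectl create secret docker-registry and referenced from the pod spec's imagePullSecrets moves the pulls onto the higher authenticated limit.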
	
	
	==> storage-provisioner [388ea771a1c8] <==
	W0929 11:45:44.979627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:46.984063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:46.990498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:48.994018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:49.000159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:51.004580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:51.013630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:53.018538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:53.026440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:55.030995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:55.042036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:57.046907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:57.057327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:59.060522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:59.066935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:46:01.070561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:46:01.079650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:46:03.084150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:46:03.091749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:46:05.095567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:46:05.104169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:46:07.108350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:46:07.114437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:46:09.118738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:46:09.127831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
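	These warnings repeat every couple of seconds, most likely from the provisioner's leader-election loop, which still renews a v1 Endpoints lock; they are deprecation noise and do not explain the failures. The replacement objects the warning points at can be listed with:

	kubectl --context addons-214441 -n kube-system get endpoints,endpointslices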
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-214441 -n addons-214441
helpers_test.go:269: (dbg) Run:  kubectl --context addons-214441 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-s6nvq ingress-nginx-admission-patch-tp6tp
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-214441 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-s6nvq ingress-nginx-admission-patch-tp6tp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-214441 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-s6nvq ingress-nginx-admission-patch-tp6tp: exit status 1 (92.337419ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-214441/192.168.39.76
	Start Time:       Mon, 29 Sep 2025 11:39:51 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rdmgz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rdmgz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m19s                  default-scheduler  Successfully assigned default/nginx to addons-214441
	  Warning  Failed     6m18s                  kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m25s (x5 over 6m18s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m25s (x5 over 6m18s)  kubelet            Error: ErrImagePull
	  Warning  Failed     3m25s (x4 over 6m4s)   kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    77s (x21 over 6m17s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     77s (x21 over 6m17s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-214441/192.168.39.76
	Start Time:       Mon, 29 Sep 2025 11:40:08 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:  10.244.0.31
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kt6ld (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-kt6ld:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m2s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-214441
	  Warning  Failed     5m18s                kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m7s (x5 over 6m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m7s (x4 over 6m2s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m7s (x5 over 6m2s)  kubelet            Error: ErrImagePull
	  Warning  Failed     58s (x20 over 6m1s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    45s (x21 over 6m1s)  kubelet            Back-off pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tffd7 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-tffd7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-s6nvq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tp6tp" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-214441 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-s6nvq ingress-nginx-admission-patch-tp6tp: exit status 1
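	The describe output above separates the two failure modes cleanly: nginx and task-pv-pod are scheduled but stuck behind the Docker Hub rate limit, while test-local-path appears never to have been scheduled (Node: <none>), consistent with its claim test-pvc having no storage class to bind against, matching the persistentvolume-binder errors earlier in the log. Two hedged follow-up checks for the PVC side:

	kubectl --context addons-214441 describe pvc test-pvc -n default
	kubectl --context addons-214441 get events -n default --field-selector involvedObject.kind=PersistentVolumeClaim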
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214441 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-214441 addons disable volumesnapshots --alsologtostderr -v=1: (1.011066295s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214441 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-214441 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.961071208s)
--- FAIL: TestAddons/parallel/CSI (373.82s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (345.26s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-214441 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-214441 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
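	The test now polls the phase of pvc "test-pvc" once every few seconds for the whole 5m0s window announced above; the poll output follows. For interactive debugging, the same wait can be collapsed into a single command, assuming a reasonably recent kubectl that supports jsonpath wait conditions:

	kubectl --context addons-214441 -n default get pvc test-pvc -o jsonpath='{.status.phase}{"\n"}'
	kubectl --context addons-214441 -n default wait --for=jsonpath='{.status.phase}'=Bound pvc/test-pvc --timeout=5m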
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default  [this identical poll was repeated a further 164 times while waiting for the PVC; duplicate lines elided]
helpers_test.go:402: (dbg) Non-zero exit: kubectl --context addons-214441 get pvc test-pvc -o jsonpath={.status.phase} -n default: context deadline exceeded (1.256µs)
helpers_test.go:404: TestAddons/parallel/LocalPath: WARNING: PVC get for "default" "test-pvc" returned: context deadline exceeded
addons_test.go:960: failed waiting for PVC test-pvc: context deadline exceeded
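The ~165 identical polls above are the test helper giving the PVC six minutes to leave Pending. A minimal Go sketch of such a wait loop, shelling out to kubectl exactly as the helper's command line shows; the 2-second interval and 5-minute deadline are assumptions, and this is an illustration rather than the actual helpers_test.go code:

// pollpvc.go — illustrative wait loop for a PVC phase, not minikube's helper.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForPVCBound(ctx context.Context, kubeContext, namespace, name string) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			// This is the path taken in the run above: the deadline expires first.
			return fmt.Errorf("waiting for PVC %s/%s: %w", namespace, name, ctx.Err())
		case <-ticker.C:
			out, err := exec.CommandContext(ctx, "kubectl",
				"--context", kubeContext,
				"get", "pvc", name,
				"-o", "jsonpath={.status.phase}",
				"-n", namespace).Output()
			if err != nil {
				continue // transient kubectl failures are simply retried
			}
			if strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	if err := waitForPVCBound(ctx, "addons-214441", "default", "test-pvc"); err != nil {
		fmt.Println("FAIL:", err)
		return
	}
	fmt.Println("PVC is Bound")
}

Because the PVC never reports "Bound" before the deadline, the loop returns the "context deadline exceeded" error recorded above.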
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-214441 -n addons-214441
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-214441 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-214441 logs -n 25: (1.127852795s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ delete  │ -p download-only-383930                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ download-only-383930 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ start   │ -o=json --download-only -p download-only-221115 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=kvm2  --auto-update-drivers=false                                                                                                                                                                                                                                                                                              │ download-only-221115 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ delete  │ -p download-only-221115                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ download-only-221115 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ delete  │ -p download-only-383930                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ download-only-383930 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ delete  │ -p download-only-221115                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ download-only-221115 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ start   │ --download-only -p binary-mirror-005122 --alsologtostderr --binary-mirror http://127.0.0.1:35607 --driver=kvm2  --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-005122 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ delete  │ -p binary-mirror-005122                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ binary-mirror-005122 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ addons  │ disable dashboard -p addons-214441                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ addons  │ enable dashboard -p addons-214441                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ start   │ -p addons-214441 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:33 UTC │
	│ addons  │ addons-214441 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:39 UTC │ 29 Sep 25 11:39 UTC │
	│ addons  │ addons-214441 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:39 UTC │ 29 Sep 25 11:39 UTC │
	│ addons  │ enable headlamp -p addons-214441 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:39 UTC │ 29 Sep 25 11:39 UTC │
	│ addons  │ addons-214441 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:39 UTC │ 29 Sep 25 11:39 UTC │
	│ addons  │ addons-214441 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ addons  │ addons-214441 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ ip      │ addons-214441 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ addons  │ addons-214441 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ addons  │ addons-214441 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-214441                                                                                                                                                                                                                                                                                                                                                                                            │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ addons  │ addons-214441 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ addons  │ addons-214441 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:42 UTC │ 29 Sep 25 11:42 UTC │
	│ addons  │ addons-214441 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:42 UTC │ 29 Sep 25 11:42 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
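For reference, every row in the audit table above is a direct invocation of the minikube binary. A minimal Go sketch of issuing one of those disable commands via os/exec; the binary path, profile, and flags are copied from the table, everything else is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the "addons-214441 addons disable volcano" audit entry.
	cmd := exec.Command("out/minikube-linux-amd64",
		"addons", "disable", "volcano",
		"-p", "addons-214441",
		"--alsologtostderr", "-v=1")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("command failed:", err)
	}
}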
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:30:26
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:30:26.464374  595895 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:30:26.464481  595895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:30:26.464487  595895 out.go:374] Setting ErrFile to fd 2...
	I0929 11:30:26.464493  595895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:30:26.464787  595895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21654-591397/.minikube/bin
	I0929 11:30:26.465454  595895 out.go:368] Setting JSON to false
	I0929 11:30:26.466447  595895 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":4374,"bootTime":1759141052,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:30:26.466553  595895 start.go:140] virtualization: kvm guest
	I0929 11:30:26.468688  595895 out.go:179] * [addons-214441] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:30:26.470181  595895 out.go:179]   - MINIKUBE_LOCATION=21654
	I0929 11:30:26.470220  595895 notify.go:220] Checking for updates...
	I0929 11:30:26.473145  595895 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:30:26.474634  595895 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21654-591397/kubeconfig
	I0929 11:30:26.475793  595895 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21654-591397/.minikube
	I0929 11:30:26.477353  595895 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:30:26.478534  595895 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:30:26.479959  595895 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:30:26.513451  595895 out.go:179] * Using the kvm2 driver based on user configuration
	I0929 11:30:26.514622  595895 start.go:304] selected driver: kvm2
	I0929 11:30:26.514644  595895 start.go:924] validating driver "kvm2" against <nil>
	I0929 11:30:26.514659  595895 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:30:26.515675  595895 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:30:26.515785  595895 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21654-591397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:30:26.530531  595895 install.go:163] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:30:26.530568  595895 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21654-591397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:30:26.545187  595895 install.go:163] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:30:26.545244  595895 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 11:30:26.545491  595895 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:30:26.545527  595895 cni.go:84] Creating CNI manager for ""
	I0929 11:30:26.545570  595895 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 11:30:26.545579  595895 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 11:30:26.545628  595895 start.go:348] cluster config:
	{Name:addons-214441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
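The struct dump above is the in-memory cluster config, which gets persisted a few lines further down ("Saving config to .../profiles/addons-214441/config.json"). A trimmed, hypothetical sketch of that persistence step; the field names and values are copied from the dump, but the struct is illustrative rather than minikube's actual ClusterConfig type:

package main

import (
	"encoding/json"
	"os"
)

// Illustrative subset of the fields shown in the log, not minikube's real types.
type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
	NetworkPlugin     string
	ServiceCIDR       string
}

type ClusterConfig struct {
	Name             string
	Driver           string
	Memory           int // MiB
	CPUs             int
	DiskSize         int // MB
	KubernetesConfig KubernetesConfig
}

func main() {
	cfg := ClusterConfig{
		Name:     "addons-214441",
		Driver:   "kvm2",
		Memory:   4096,
		CPUs:     2,
		DiskSize: 20000,
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.34.0",
			ClusterName:       "addons-214441",
			ContainerRuntime:  "docker",
			NetworkPlugin:     "cni",
			ServiceCIDR:       "10.96.0.0/12",
		},
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	// The real path in the log is .minikube/profiles/addons-214441/config.json.
	if err := os.WriteFile("config.json", data, 0o644); err != nil {
		panic(err)
	}
}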
	I0929 11:30:26.545714  595895 iso.go:125] acquiring lock: {Name:mk3bf2644aacab696b9f4d566a6d81a30d75b71a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:30:26.547400  595895 out.go:179] * Starting "addons-214441" primary control-plane node in "addons-214441" cluster
	I0929 11:30:26.548855  595895 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 11:30:26.548909  595895 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21654-591397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0929 11:30:26.548918  595895 cache.go:58] Caching tarball of preloaded images
	I0929 11:30:26.549035  595895 preload.go:172] Found /home/jenkins/minikube-integration/21654-591397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0929 11:30:26.549046  595895 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 11:30:26.549389  595895 profile.go:143] Saving config to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/config.json ...
	I0929 11:30:26.549415  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/config.json: {Name:mka28e9e486990f30eb3eb321797c26d13a435f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:26.549559  595895 start.go:360] acquireMachinesLock for addons-214441: {Name:mka3370f06ebed6e47b43729e748683065f344f5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0929 11:30:26.549614  595895 start.go:364] duration metric: took 40.43µs to acquireMachinesLock for "addons-214441"
	I0929 11:30:26.549633  595895 start.go:93] Provisioning new machine with config: &{Name:addons-214441 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 11:30:26.549681  595895 start.go:125] createHost starting for "" (driver="kvm2")
	I0929 11:30:26.551210  595895 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0929 11:30:26.551360  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:30:26.551417  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:30:26.564991  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37291
	I0929 11:30:26.565640  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:30:26.566242  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:30:26.566262  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:30:26.566742  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:30:26.566933  595895 main.go:141] libmachine: (addons-214441) Calling .GetMachineName
	I0929 11:30:26.567150  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:26.567316  595895 start.go:159] libmachine.API.Create for "addons-214441" (driver="kvm2")
	I0929 11:30:26.567351  595895 client.go:168] LocalClient.Create starting
	I0929 11:30:26.567402  595895 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem
	I0929 11:30:26.955780  595895 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/cert.pem
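The two lines above create the CA and client certificate under .minikube/certs. A minimal, standard-library-only sketch of generating such a self-signed CA; the "minikubeCA" common name is taken from the APIServerName in the config dump, while the lifetime and output file names are assumptions:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour), // illustrative lifetime
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Self-signed: the template is its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	if err := os.WriteFile("ca.pem", certPEM, 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile("ca-key.pem", keyPEM, 0o600); err != nil {
		panic(err)
	}
}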
	I0929 11:30:27.214636  595895 main.go:141] libmachine: Running pre-create checks...
	I0929 11:30:27.214665  595895 main.go:141] libmachine: (addons-214441) Calling .PreCreateCheck
	I0929 11:30:27.215304  595895 main.go:141] libmachine: (addons-214441) Calling .GetConfigRaw
	I0929 11:30:27.215869  595895 main.go:141] libmachine: Creating machine...
	I0929 11:30:27.215887  595895 main.go:141] libmachine: (addons-214441) Calling .Create
	I0929 11:30:27.216119  595895 main.go:141] libmachine: (addons-214441) creating domain...
	I0929 11:30:27.216141  595895 main.go:141] libmachine: (addons-214441) creating network...
	I0929 11:30:27.217698  595895 main.go:141] libmachine: (addons-214441) DBG | found existing default network
	I0929 11:30:27.217987  595895 main.go:141] libmachine: (addons-214441) DBG | <network>
	I0929 11:30:27.218041  595895 main.go:141] libmachine: (addons-214441) DBG |   <name>default</name>
	I0929 11:30:27.218077  595895 main.go:141] libmachine: (addons-214441) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I0929 11:30:27.218099  595895 main.go:141] libmachine: (addons-214441) DBG |   <forward mode='nat'>
	I0929 11:30:27.218124  595895 main.go:141] libmachine: (addons-214441) DBG |     <nat>
	I0929 11:30:27.218134  595895 main.go:141] libmachine: (addons-214441) DBG |       <port start='1024' end='65535'/>
	I0929 11:30:27.218144  595895 main.go:141] libmachine: (addons-214441) DBG |     </nat>
	I0929 11:30:27.218151  595895 main.go:141] libmachine: (addons-214441) DBG |   </forward>
	I0929 11:30:27.218161  595895 main.go:141] libmachine: (addons-214441) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I0929 11:30:27.218190  595895 main.go:141] libmachine: (addons-214441) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I0929 11:30:27.218203  595895 main.go:141] libmachine: (addons-214441) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I0929 11:30:27.218212  595895 main.go:141] libmachine: (addons-214441) DBG |     <dhcp>
	I0929 11:30:27.218222  595895 main.go:141] libmachine: (addons-214441) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I0929 11:30:27.218232  595895 main.go:141] libmachine: (addons-214441) DBG |     </dhcp>
	I0929 11:30:27.218245  595895 main.go:141] libmachine: (addons-214441) DBG |   </ip>
	I0929 11:30:27.218256  595895 main.go:141] libmachine: (addons-214441) DBG | </network>
	I0929 11:30:27.218263  595895 main.go:141] libmachine: (addons-214441) DBG | 
	I0929 11:30:27.219018  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.218796  595923 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000200f10}
	I0929 11:30:27.219127  595895 main.go:141] libmachine: (addons-214441) DBG | defining private network:
	I0929 11:30:27.219156  595895 main.go:141] libmachine: (addons-214441) DBG | 
	I0929 11:30:27.219168  595895 main.go:141] libmachine: (addons-214441) DBG | <network>
	I0929 11:30:27.219179  595895 main.go:141] libmachine: (addons-214441) DBG |   <name>mk-addons-214441</name>
	I0929 11:30:27.219187  595895 main.go:141] libmachine: (addons-214441) DBG |   <dns enable='no'/>
	I0929 11:30:27.219194  595895 main.go:141] libmachine: (addons-214441) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0929 11:30:27.219200  595895 main.go:141] libmachine: (addons-214441) DBG |     <dhcp>
	I0929 11:30:27.219208  595895 main.go:141] libmachine: (addons-214441) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0929 11:30:27.219214  595895 main.go:141] libmachine: (addons-214441) DBG |     </dhcp>
	I0929 11:30:27.219218  595895 main.go:141] libmachine: (addons-214441) DBG |   </ip>
	I0929 11:30:27.219224  595895 main.go:141] libmachine: (addons-214441) DBG | </network>
	I0929 11:30:27.219227  595895 main.go:141] libmachine: (addons-214441) DBG | 
	I0929 11:30:27.225021  595895 main.go:141] libmachine: (addons-214441) DBG | creating private network mk-addons-214441 192.168.39.0/24...
	I0929 11:30:27.300287  595895 main.go:141] libmachine: (addons-214441) DBG | private network mk-addons-214441 192.168.39.0/24 created
	I0929 11:30:27.300635  595895 main.go:141] libmachine: (addons-214441) DBG | <network>
	I0929 11:30:27.300651  595895 main.go:141] libmachine: (addons-214441) DBG |   <name>mk-addons-214441</name>
	I0929 11:30:27.300675  595895 main.go:141] libmachine: (addons-214441) setting up store path in /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441 ...
	I0929 11:30:27.300695  595895 main.go:141] libmachine: (addons-214441) building disk image from file:///home/jenkins/minikube-integration/21654-591397/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I0929 11:30:27.300713  595895 main.go:141] libmachine: (addons-214441) DBG |   <uuid>9d6191f7-7df6-4691-bff3-3dbacc8ac925</uuid>
	I0929 11:30:27.300719  595895 main.go:141] libmachine: (addons-214441) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I0929 11:30:27.300726  595895 main.go:141] libmachine: (addons-214441) DBG |   <mac address='52:54:00:ff:bc:22'/>
	I0929 11:30:27.300730  595895 main.go:141] libmachine: (addons-214441) DBG |   <dns enable='no'/>
	I0929 11:30:27.300736  595895 main.go:141] libmachine: (addons-214441) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0929 11:30:27.300741  595895 main.go:141] libmachine: (addons-214441) DBG |     <dhcp>
	I0929 11:30:27.300747  595895 main.go:141] libmachine: (addons-214441) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0929 11:30:27.300754  595895 main.go:141] libmachine: (addons-214441) DBG |     </dhcp>
	I0929 11:30:27.300758  595895 main.go:141] libmachine: (addons-214441) DBG |   </ip>
	I0929 11:30:27.300763  595895 main.go:141] libmachine: (addons-214441) DBG | </network>
	I0929 11:30:27.300770  595895 main.go:141] libmachine: (addons-214441) DBG | 
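The XML above is the dedicated libvirt network (mk-addons-214441, 192.168.39.0/24) created for this profile. A minimal sketch, not the kvm2 driver's code, of defining and starting an equivalent network by shelling out to virsh; the XML is copied from the log and qemu:///system matches the KVMQemuURI in the config:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

const networkXML = `<network>
  <name>mk-addons-214441</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	// Write the network definition to a temp file for virsh net-define.
	f, err := os.CreateTemp("", "net-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		panic(err)
	}
	f.Close()

	for _, args := range [][]string{
		{"--connect", "qemu:///system", "net-define", f.Name()},
		{"--connect", "qemu:///system", "net-start", "mk-addons-214441"},
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			panic(err)
		}
	}
}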
	I0929 11:30:27.300780  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.300615  595923 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21654-591397/.minikube
	I0929 11:30:27.300970  595895 main.go:141] libmachine: (addons-214441) Downloading /home/jenkins/minikube-integration/21654-591397/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21654-591397/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I0929 11:30:27.567829  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.567633  595923 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa...
	I0929 11:30:27.812384  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.812174  595923 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/addons-214441.rawdisk...
	I0929 11:30:27.812428  595895 main.go:141] libmachine: (addons-214441) DBG | Writing magic tar header
	I0929 11:30:27.812454  595895 main.go:141] libmachine: (addons-214441) DBG | Writing SSH key tar header
	I0929 11:30:27.812465  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.812330  595923 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441 ...
	I0929 11:30:27.812480  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441
	I0929 11:30:27.812548  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21654-591397/.minikube/machines
	I0929 11:30:27.812584  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441 (perms=drwx------)
	I0929 11:30:27.812594  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21654-591397/.minikube
	I0929 11:30:27.812609  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21654-591397
	I0929 11:30:27.812617  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0929 11:30:27.812625  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins
	I0929 11:30:27.812632  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home
	I0929 11:30:27.812642  595895 main.go:141] libmachine: (addons-214441) DBG | skipping /home - not owner
	I0929 11:30:27.812734  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration/21654-591397/.minikube/machines (perms=drwxr-xr-x)
	I0929 11:30:27.812784  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration/21654-591397/.minikube (perms=drwxr-xr-x)
	I0929 11:30:27.812829  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration/21654-591397 (perms=drwxrwxr-x)
	I0929 11:30:27.812851  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0929 11:30:27.812866  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
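The steps above create the raw disk image for the VM and set the execute bit on the directories leading to it so libvirt/qemu can traverse them. A small illustrative sketch of those two steps; the 20000 MB size comes from DiskSize in the config, and the paths are shortened:

package main

import "os"

func main() {
	const diskSizeMB = 20000 // matches DiskSize:20000 in the cluster config

	f, err := os.Create("addons-214441.rawdisk")
	if err != nil {
		panic(err)
	}
	// Truncate grows the file sparsely, so the ~20 GB image does not consume
	// real space until the guest writes to it.
	if err := f.Truncate(int64(diskSizeMB) * 1024 * 1024); err != nil {
		panic(err)
	}
	f.Close()

	// "setting executable bit" in the log: directories need the execute bit
	// to be traversable on the way to the image file.
	if err := os.Chmod(".", 0o755); err != nil {
		panic(err)
	}
}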
	I0929 11:30:27.812895  595895 main.go:141] libmachine: (addons-214441) defining domain...
	I0929 11:30:27.814169  595895 main.go:141] libmachine: (addons-214441) defining domain using XML: 
	I0929 11:30:27.814189  595895 main.go:141] libmachine: (addons-214441) <domain type='kvm'>
	I0929 11:30:27.814197  595895 main.go:141] libmachine: (addons-214441)   <name>addons-214441</name>
	I0929 11:30:27.814204  595895 main.go:141] libmachine: (addons-214441)   <memory unit='MiB'>4096</memory>
	I0929 11:30:27.814211  595895 main.go:141] libmachine: (addons-214441)   <vcpu>2</vcpu>
	I0929 11:30:27.814217  595895 main.go:141] libmachine: (addons-214441)   <features>
	I0929 11:30:27.814224  595895 main.go:141] libmachine: (addons-214441)     <acpi/>
	I0929 11:30:27.814236  595895 main.go:141] libmachine: (addons-214441)     <apic/>
	I0929 11:30:27.814260  595895 main.go:141] libmachine: (addons-214441)     <pae/>
	I0929 11:30:27.814274  595895 main.go:141] libmachine: (addons-214441)   </features>
	I0929 11:30:27.814283  595895 main.go:141] libmachine: (addons-214441)   <cpu mode='host-passthrough'>
	I0929 11:30:27.814290  595895 main.go:141] libmachine: (addons-214441)   </cpu>
	I0929 11:30:27.814300  595895 main.go:141] libmachine: (addons-214441)   <os>
	I0929 11:30:27.814310  595895 main.go:141] libmachine: (addons-214441)     <type>hvm</type>
	I0929 11:30:27.814319  595895 main.go:141] libmachine: (addons-214441)     <boot dev='cdrom'/>
	I0929 11:30:27.814323  595895 main.go:141] libmachine: (addons-214441)     <boot dev='hd'/>
	I0929 11:30:27.814331  595895 main.go:141] libmachine: (addons-214441)     <bootmenu enable='no'/>
	I0929 11:30:27.814337  595895 main.go:141] libmachine: (addons-214441)   </os>
	I0929 11:30:27.814342  595895 main.go:141] libmachine: (addons-214441)   <devices>
	I0929 11:30:27.814352  595895 main.go:141] libmachine: (addons-214441)     <disk type='file' device='cdrom'>
	I0929 11:30:27.814381  595895 main.go:141] libmachine: (addons-214441)       <source file='/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/boot2docker.iso'/>
	I0929 11:30:27.814393  595895 main.go:141] libmachine: (addons-214441)       <target dev='hdc' bus='scsi'/>
	I0929 11:30:27.814438  595895 main.go:141] libmachine: (addons-214441)       <readonly/>
	I0929 11:30:27.814469  595895 main.go:141] libmachine: (addons-214441)     </disk>
	I0929 11:30:27.814485  595895 main.go:141] libmachine: (addons-214441)     <disk type='file' device='disk'>
	I0929 11:30:27.814501  595895 main.go:141] libmachine: (addons-214441)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0929 11:30:27.814519  595895 main.go:141] libmachine: (addons-214441)       <source file='/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/addons-214441.rawdisk'/>
	I0929 11:30:27.814537  595895 main.go:141] libmachine: (addons-214441)       <target dev='hda' bus='virtio'/>
	I0929 11:30:27.814551  595895 main.go:141] libmachine: (addons-214441)     </disk>
	I0929 11:30:27.814564  595895 main.go:141] libmachine: (addons-214441)     <interface type='network'>
	I0929 11:30:27.814577  595895 main.go:141] libmachine: (addons-214441)       <source network='mk-addons-214441'/>
	I0929 11:30:27.814587  595895 main.go:141] libmachine: (addons-214441)       <model type='virtio'/>
	I0929 11:30:27.814598  595895 main.go:141] libmachine: (addons-214441)     </interface>
	I0929 11:30:27.814608  595895 main.go:141] libmachine: (addons-214441)     <interface type='network'>
	I0929 11:30:27.814616  595895 main.go:141] libmachine: (addons-214441)       <source network='default'/>
	I0929 11:30:27.814644  595895 main.go:141] libmachine: (addons-214441)       <model type='virtio'/>
	I0929 11:30:27.814658  595895 main.go:141] libmachine: (addons-214441)     </interface>
	I0929 11:30:27.814670  595895 main.go:141] libmachine: (addons-214441)     <serial type='pty'>
	I0929 11:30:27.814681  595895 main.go:141] libmachine: (addons-214441)       <target port='0'/>
	I0929 11:30:27.814692  595895 main.go:141] libmachine: (addons-214441)     </serial>
	I0929 11:30:27.814707  595895 main.go:141] libmachine: (addons-214441)     <console type='pty'>
	I0929 11:30:27.814717  595895 main.go:141] libmachine: (addons-214441)       <target type='serial' port='0'/>
	I0929 11:30:27.814725  595895 main.go:141] libmachine: (addons-214441)     </console>
	I0929 11:30:27.814732  595895 main.go:141] libmachine: (addons-214441)     <rng model='virtio'>
	I0929 11:30:27.814741  595895 main.go:141] libmachine: (addons-214441)       <backend model='random'>/dev/random</backend>
	I0929 11:30:27.814750  595895 main.go:141] libmachine: (addons-214441)     </rng>
	I0929 11:30:27.814759  595895 main.go:141] libmachine: (addons-214441)   </devices>
	I0929 11:30:27.814768  595895 main.go:141] libmachine: (addons-214441) </domain>
	I0929 11:30:27.814781  595895 main.go:141] libmachine: (addons-214441) 
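The <domain> document above is what the kvm2 driver hands to libvirt. A minimal sketch of defining a domain from such an XML file with the Go libvirt bindings, assuming libvirt.org/go/libvirt and a qemu:///system connection:

    package main

    import (
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        // the <domain> document logged above, saved to a file for this sketch
        xml, err := os.ReadFile("addons-214441.xml")
        if err != nil {
            log.Fatal(err)
        }

        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // DomainDefineXML registers the domain without starting it; libvirt
        // fills in defaults (UUID, PCI addresses, emulator path), which is why
        // the "starting domain XML" dump further down is richer than the
        // definition above.
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()
    }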
	I0929 11:30:27.822484  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:b8:70:d1 in network default
	I0929 11:30:27.823310  595895 main.go:141] libmachine: (addons-214441) starting domain...
	I0929 11:30:27.823336  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:27.823353  595895 main.go:141] libmachine: (addons-214441) ensuring networks are active...
	I0929 11:30:27.824165  595895 main.go:141] libmachine: (addons-214441) Ensuring network default is active
	I0929 11:30:27.824600  595895 main.go:141] libmachine: (addons-214441) Ensuring network mk-addons-214441 is active
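Both the private mk-addons-214441 network and the default NAT network must be active before the domain can start. Continuing the illustrative package from the previous sketch, that check might look like:

    // ensureNetworkActive starts the named libvirt network if it is defined
    // but not currently running (equivalent to `virsh net-start <name>`).
    func ensureNetworkActive(conn *libvirt.Connect, name string) error {
        net, err := conn.LookupNetworkByName(name)
        if err != nil {
            return err
        }
        defer net.Free()

        active, err := net.IsActive()
        if err != nil {
            return err
        }
        if !active {
            return net.Create()
        }
        return nil
    }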
	I0929 11:30:27.825327  595895 main.go:141] libmachine: (addons-214441) getting domain XML...
	I0929 11:30:27.826485  595895 main.go:141] libmachine: (addons-214441) DBG | starting domain XML:
	I0929 11:30:27.826497  595895 main.go:141] libmachine: (addons-214441) DBG | <domain type='kvm'>
	I0929 11:30:27.826534  595895 main.go:141] libmachine: (addons-214441) DBG |   <name>addons-214441</name>
	I0929 11:30:27.826556  595895 main.go:141] libmachine: (addons-214441) DBG |   <uuid>44179717-3988-47cd-b8d8-61dffe58e059</uuid>
	I0929 11:30:27.826564  595895 main.go:141] libmachine: (addons-214441) DBG |   <memory unit='KiB'>4194304</memory>
	I0929 11:30:27.826573  595895 main.go:141] libmachine: (addons-214441) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I0929 11:30:27.826583  595895 main.go:141] libmachine: (addons-214441) DBG |   <vcpu placement='static'>2</vcpu>
	I0929 11:30:27.826594  595895 main.go:141] libmachine: (addons-214441) DBG |   <os>
	I0929 11:30:27.826603  595895 main.go:141] libmachine: (addons-214441) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0929 11:30:27.826611  595895 main.go:141] libmachine: (addons-214441) DBG |     <boot dev='cdrom'/>
	I0929 11:30:27.826619  595895 main.go:141] libmachine: (addons-214441) DBG |     <boot dev='hd'/>
	I0929 11:30:27.826627  595895 main.go:141] libmachine: (addons-214441) DBG |     <bootmenu enable='no'/>
	I0929 11:30:27.826636  595895 main.go:141] libmachine: (addons-214441) DBG |   </os>
	I0929 11:30:27.826643  595895 main.go:141] libmachine: (addons-214441) DBG |   <features>
	I0929 11:30:27.826652  595895 main.go:141] libmachine: (addons-214441) DBG |     <acpi/>
	I0929 11:30:27.826658  595895 main.go:141] libmachine: (addons-214441) DBG |     <apic/>
	I0929 11:30:27.826666  595895 main.go:141] libmachine: (addons-214441) DBG |     <pae/>
	I0929 11:30:27.826670  595895 main.go:141] libmachine: (addons-214441) DBG |   </features>
	I0929 11:30:27.826676  595895 main.go:141] libmachine: (addons-214441) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0929 11:30:27.826680  595895 main.go:141] libmachine: (addons-214441) DBG |   <clock offset='utc'/>
	I0929 11:30:27.826712  595895 main.go:141] libmachine: (addons-214441) DBG |   <on_poweroff>destroy</on_poweroff>
	I0929 11:30:27.826730  595895 main.go:141] libmachine: (addons-214441) DBG |   <on_reboot>restart</on_reboot>
	I0929 11:30:27.826740  595895 main.go:141] libmachine: (addons-214441) DBG |   <on_crash>destroy</on_crash>
	I0929 11:30:27.826748  595895 main.go:141] libmachine: (addons-214441) DBG |   <devices>
	I0929 11:30:27.826760  595895 main.go:141] libmachine: (addons-214441) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0929 11:30:27.826771  595895 main.go:141] libmachine: (addons-214441) DBG |     <disk type='file' device='cdrom'>
	I0929 11:30:27.826782  595895 main.go:141] libmachine: (addons-214441) DBG |       <driver name='qemu' type='raw'/>
	I0929 11:30:27.826804  595895 main.go:141] libmachine: (addons-214441) DBG |       <source file='/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/boot2docker.iso'/>
	I0929 11:30:27.826817  595895 main.go:141] libmachine: (addons-214441) DBG |       <target dev='hdc' bus='scsi'/>
	I0929 11:30:27.826828  595895 main.go:141] libmachine: (addons-214441) DBG |       <readonly/>
	I0929 11:30:27.826842  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0929 11:30:27.826853  595895 main.go:141] libmachine: (addons-214441) DBG |     </disk>
	I0929 11:30:27.826863  595895 main.go:141] libmachine: (addons-214441) DBG |     <disk type='file' device='disk'>
	I0929 11:30:27.826884  595895 main.go:141] libmachine: (addons-214441) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0929 11:30:27.826906  595895 main.go:141] libmachine: (addons-214441) DBG |       <source file='/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/addons-214441.rawdisk'/>
	I0929 11:30:27.826922  595895 main.go:141] libmachine: (addons-214441) DBG |       <target dev='hda' bus='virtio'/>
	I0929 11:30:27.826937  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0929 11:30:27.826947  595895 main.go:141] libmachine: (addons-214441) DBG |     </disk>
	I0929 11:30:27.826959  595895 main.go:141] libmachine: (addons-214441) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0929 11:30:27.826972  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0929 11:30:27.826984  595895 main.go:141] libmachine: (addons-214441) DBG |     </controller>
	I0929 11:30:27.827000  595895 main.go:141] libmachine: (addons-214441) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0929 11:30:27.827014  595895 main.go:141] libmachine: (addons-214441) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0929 11:30:27.827028  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0929 11:30:27.827039  595895 main.go:141] libmachine: (addons-214441) DBG |     </controller>
	I0929 11:30:27.827046  595895 main.go:141] libmachine: (addons-214441) DBG |     <interface type='network'>
	I0929 11:30:27.827053  595895 main.go:141] libmachine: (addons-214441) DBG |       <mac address='52:54:00:98:9c:d8'/>
	I0929 11:30:27.827060  595895 main.go:141] libmachine: (addons-214441) DBG |       <source network='mk-addons-214441'/>
	I0929 11:30:27.827087  595895 main.go:141] libmachine: (addons-214441) DBG |       <model type='virtio'/>
	I0929 11:30:27.827120  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0929 11:30:27.827133  595895 main.go:141] libmachine: (addons-214441) DBG |     </interface>
	I0929 11:30:27.827141  595895 main.go:141] libmachine: (addons-214441) DBG |     <interface type='network'>
	I0929 11:30:27.827146  595895 main.go:141] libmachine: (addons-214441) DBG |       <mac address='52:54:00:b8:70:d1'/>
	I0929 11:30:27.827154  595895 main.go:141] libmachine: (addons-214441) DBG |       <source network='default'/>
	I0929 11:30:27.827172  595895 main.go:141] libmachine: (addons-214441) DBG |       <model type='virtio'/>
	I0929 11:30:27.827197  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0929 11:30:27.827208  595895 main.go:141] libmachine: (addons-214441) DBG |     </interface>
	I0929 11:30:27.827218  595895 main.go:141] libmachine: (addons-214441) DBG |     <serial type='pty'>
	I0929 11:30:27.827232  595895 main.go:141] libmachine: (addons-214441) DBG |       <target type='isa-serial' port='0'>
	I0929 11:30:27.827252  595895 main.go:141] libmachine: (addons-214441) DBG |         <model name='isa-serial'/>
	I0929 11:30:27.827267  595895 main.go:141] libmachine: (addons-214441) DBG |       </target>
	I0929 11:30:27.827295  595895 main.go:141] libmachine: (addons-214441) DBG |     </serial>
	I0929 11:30:27.827306  595895 main.go:141] libmachine: (addons-214441) DBG |     <console type='pty'>
	I0929 11:30:27.827316  595895 main.go:141] libmachine: (addons-214441) DBG |       <target type='serial' port='0'/>
	I0929 11:30:27.827327  595895 main.go:141] libmachine: (addons-214441) DBG |     </console>
	I0929 11:30:27.827337  595895 main.go:141] libmachine: (addons-214441) DBG |     <input type='mouse' bus='ps2'/>
	I0929 11:30:27.827353  595895 main.go:141] libmachine: (addons-214441) DBG |     <input type='keyboard' bus='ps2'/>
	I0929 11:30:27.827365  595895 main.go:141] libmachine: (addons-214441) DBG |     <audio id='1' type='none'/>
	I0929 11:30:27.827381  595895 main.go:141] libmachine: (addons-214441) DBG |     <memballoon model='virtio'>
	I0929 11:30:27.827396  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0929 11:30:27.827407  595895 main.go:141] libmachine: (addons-214441) DBG |     </memballoon>
	I0929 11:30:27.827416  595895 main.go:141] libmachine: (addons-214441) DBG |     <rng model='virtio'>
	I0929 11:30:27.827462  595895 main.go:141] libmachine: (addons-214441) DBG |       <backend model='random'>/dev/random</backend>
	I0929 11:30:27.827477  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0929 11:30:27.827484  595895 main.go:141] libmachine: (addons-214441) DBG |     </rng>
	I0929 11:30:27.827492  595895 main.go:141] libmachine: (addons-214441) DBG |   </devices>
	I0929 11:30:27.827507  595895 main.go:141] libmachine: (addons-214441) DBG | </domain>
	I0929 11:30:27.827523  595895 main.go:141] libmachine: (addons-214441) DBG | 
	I0929 11:30:29.153785  595895 main.go:141] libmachine: (addons-214441) waiting for domain to start...
	I0929 11:30:29.155338  595895 main.go:141] libmachine: (addons-214441) domain is now running
	I0929 11:30:29.155366  595895 main.go:141] libmachine: (addons-214441) waiting for IP...
	I0929 11:30:29.156233  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:29.156741  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:29.156768  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:29.157097  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:29.157173  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:29.157084  595923 retry.go:31] will retry after 193.130078ms: waiting for domain to come up
	I0929 11:30:29.351641  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:29.352088  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:29.352131  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:29.352401  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:29.352453  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:29.352389  595923 retry.go:31] will retry after 298.936458ms: waiting for domain to come up
	I0929 11:30:29.653209  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:29.653776  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:29.653812  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:29.654092  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:29.654145  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:29.654057  595923 retry.go:31] will retry after 319.170448ms: waiting for domain to come up
	I0929 11:30:29.974953  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:29.975656  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:29.975697  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:29.976026  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:29.976053  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:29.976008  595923 retry.go:31] will retry after 599.248845ms: waiting for domain to come up
	I0929 11:30:30.576933  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:30.577607  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:30.577638  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:30.577976  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:30.578001  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:30.577944  595923 retry.go:31] will retry after 506.439756ms: waiting for domain to come up
	I0929 11:30:31.085911  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:31.086486  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:31.086516  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:31.086838  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:31.086901  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:31.086827  595923 retry.go:31] will retry after 714.950089ms: waiting for domain to come up
	I0929 11:30:31.803913  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:31.804432  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:31.804465  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:31.804799  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:31.804835  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:31.804762  595923 retry.go:31] will retry after 948.596157ms: waiting for domain to come up
	I0929 11:30:32.755226  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:32.755814  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:32.755837  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:32.756159  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:32.756191  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:32.756135  595923 retry.go:31] will retry after 1.377051804s: waiting for domain to come up
	I0929 11:30:34.136012  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:34.136582  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:34.136605  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:34.136880  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:34.136912  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:34.136849  595923 retry.go:31] will retry after 1.34696154s: waiting for domain to come up
	I0929 11:30:35.485739  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:35.486269  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:35.486292  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:35.486548  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:35.486587  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:35.486521  595923 retry.go:31] will retry after 1.574508192s: waiting for domain to come up
	I0929 11:30:37.063528  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:37.064142  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:37.064170  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:37.064559  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:37.064594  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:37.064489  595923 retry.go:31] will retry after 2.067291223s: waiting for domain to come up
	I0929 11:30:39.135405  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:39.135998  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:39.136030  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:39.136354  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:39.136412  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:39.136338  595923 retry.go:31] will retry after 3.104602856s: waiting for domain to come up
	I0929 11:30:42.242410  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:42.242939  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:42.242965  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:42.243288  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:42.243344  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:42.243280  595923 retry.go:31] will retry after 4.150705767s: waiting for domain to come up
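The lines above poll the DHCP leases (falling back to the ARP table) with a growing, jittered delay until the guest obtains an address. A generic sketch of that retry shape, assuming fmt, math/rand and time imports; lookupIP is a hypothetical stand-in for the lease/ARP query, not the driver's retry.go:

    // waitForIP polls lookupIP with a growing, jittered delay, mirroring the
    // "will retry after ..." lines above, until an address appears or the
    // timeout expires.
    func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for {
            ip, err := lookupIP()
            if err == nil && ip != "" {
                return ip, nil
            }
            if time.Now().After(deadline) {
                return "", fmt.Errorf("timed out waiting for domain IP: %v", err)
            }
            // add up to 50% jitter and back off, capped at roughly 5s between attempts
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
            if delay < 5*time.Second {
                delay *= 2
            }
        }
    }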
	I0929 11:30:46.398779  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.399347  595895 main.go:141] libmachine: (addons-214441) found domain IP: 192.168.39.76
	I0929 11:30:46.399374  595895 main.go:141] libmachine: (addons-214441) reserving static IP address...
	I0929 11:30:46.399388  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has current primary IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.399901  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find host DHCP lease matching {name: "addons-214441", mac: "52:54:00:98:9c:d8", ip: "192.168.39.76"} in network mk-addons-214441
	I0929 11:30:46.587177  595895 main.go:141] libmachine: (addons-214441) DBG | Getting to WaitForSSH function...
	I0929 11:30:46.587215  595895 main.go:141] libmachine: (addons-214441) reserved static IP address 192.168.39.76 for domain addons-214441
	I0929 11:30:46.587228  595895 main.go:141] libmachine: (addons-214441) waiting for SSH...
	I0929 11:30:46.590179  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.590588  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:minikube Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:46.590626  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.590750  595895 main.go:141] libmachine: (addons-214441) DBG | Using SSH client type: external
	I0929 11:30:46.590791  595895 main.go:141] libmachine: (addons-214441) DBG | Using SSH private key: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa (-rw-------)
	I0929 11:30:46.590840  595895 main.go:141] libmachine: (addons-214441) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0929 11:30:46.590868  595895 main.go:141] libmachine: (addons-214441) DBG | About to run SSH command:
	I0929 11:30:46.590883  595895 main.go:141] libmachine: (addons-214441) DBG | exit 0
	I0929 11:30:46.729877  595895 main.go:141] libmachine: (addons-214441) DBG | SSH cmd err, output: <nil>: 
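The SSH probe above simply runs `exit 0` against the guest with host-key checking disabled, retrying until the daemon answers. A sketch of the same probe via os/exec, reusing the options from the log (waitForSSH is an illustrative name; fmt, os/exec and time imports are assumed):

    // waitForSSH shells out to the system ssh binary until `exit 0` succeeds,
    // using the same non-interactive options the driver logs above.
    func waitForSSH(ip, keyPath string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            cmd := exec.Command("ssh",
                "-F", "/dev/null",
                "-o", "ConnectTimeout=10",
                "-o", "StrictHostKeyChecking=no",
                "-o", "UserKnownHostsFile=/dev/null",
                "-o", "PasswordAuthentication=no",
                "-o", "IdentitiesOnly=yes",
                "-i", keyPath,
                "-p", "22",
                "docker@"+ip,
                "exit 0",
            )
            if err := cmd.Run(); err == nil {
                return nil
            } else if time.Now().After(deadline) {
                return fmt.Errorf("ssh not reachable on %s: %v", ip, err)
            }
            time.Sleep(2 * time.Second)
        }
    }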
	I0929 11:30:46.730171  595895 main.go:141] libmachine: (addons-214441) domain creation complete
	I0929 11:30:46.730534  595895 main.go:141] libmachine: (addons-214441) Calling .GetConfigRaw
	I0929 11:30:46.731196  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:46.731410  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:46.731600  595895 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0929 11:30:46.731623  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:30:46.732882  595895 main.go:141] libmachine: Detecting operating system of created instance...
	I0929 11:30:46.732897  595895 main.go:141] libmachine: Waiting for SSH to be available...
	I0929 11:30:46.732902  595895 main.go:141] libmachine: Getting to WaitForSSH function...
	I0929 11:30:46.732908  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:46.735685  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.736210  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:46.736238  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.736397  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:46.736652  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.736854  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.736998  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:46.737156  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:46.737392  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:46.737403  595895 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0929 11:30:46.844278  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 11:30:46.844312  595895 main.go:141] libmachine: Detecting the provisioner...
	I0929 11:30:46.844324  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:46.848224  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.849228  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:46.849264  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.849457  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:46.849706  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.849884  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.850038  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:46.850227  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:46.850481  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:46.850494  595895 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0929 11:30:46.959386  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0929 11:30:46.959537  595895 main.go:141] libmachine: found compatible host: buildroot
	I0929 11:30:46.959560  595895 main.go:141] libmachine: Provisioning with buildroot...
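The provisioner is chosen by reading /etc/os-release over SSH and matching its fields (Buildroot here). A minimal parser for that key=value format, assuming the file content has already been fetched into a string (strings import assumed):

    // parseOSRelease turns the key=value lines of /etc/os-release into a map,
    // stripping optional quotes, e.g. ID=buildroot, PRETTY_NAME="Buildroot 2025.02".
    func parseOSRelease(content string) map[string]string {
        out := map[string]string{}
        for _, line := range strings.Split(content, "\n") {
            line = strings.TrimSpace(line)
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            key, value, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            out[key] = strings.Trim(value, `"`)
        }
        return out
    }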
	I0929 11:30:46.959572  595895 main.go:141] libmachine: (addons-214441) Calling .GetMachineName
	I0929 11:30:46.959897  595895 buildroot.go:166] provisioning hostname "addons-214441"
	I0929 11:30:46.959920  595895 main.go:141] libmachine: (addons-214441) Calling .GetMachineName
	I0929 11:30:46.960158  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:46.963429  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.963851  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:46.963892  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.964187  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:46.964389  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.964590  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.964750  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:46.964942  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:46.965188  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:46.965202  595895 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-214441 && echo "addons-214441" | sudo tee /etc/hostname
	I0929 11:30:47.092132  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-214441
	
	I0929 11:30:47.092159  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.095605  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.096136  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.096169  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.096340  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.096555  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.096747  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.096902  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.097123  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:47.097351  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:47.097369  595895 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-214441' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-214441/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-214441' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 11:30:47.216048  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 11:30:47.216081  595895 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21654-591397/.minikube CaCertPath:/home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21654-591397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21654-591397/.minikube}
	I0929 11:30:47.216160  595895 buildroot.go:174] setting up certificates
	I0929 11:30:47.216176  595895 provision.go:84] configureAuth start
	I0929 11:30:47.216187  595895 main.go:141] libmachine: (addons-214441) Calling .GetMachineName
	I0929 11:30:47.216551  595895 main.go:141] libmachine: (addons-214441) Calling .GetIP
	I0929 11:30:47.219822  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.220206  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.220241  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.220424  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.222925  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.223320  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.223351  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.223603  595895 provision.go:143] copyHostCerts
	I0929 11:30:47.223674  595895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21654-591397/.minikube/cert.pem (1123 bytes)
	I0929 11:30:47.223815  595895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21654-591397/.minikube/key.pem (1675 bytes)
	I0929 11:30:47.223908  595895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21654-591397/.minikube/ca.pem (1082 bytes)
	I0929 11:30:47.223987  595895 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca-key.pem org=jenkins.addons-214441 san=[127.0.0.1 192.168.39.76 addons-214441 localhost minikube]
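configureAuth issues a fresh server certificate signed by the local minikube CA, carrying the SANs listed above (127.0.0.1, the guest IP, the machine name, localhost, minikube). A condensed crypto/x509 sketch of that step, assuming the CA certificate and key are already loaded; newServerCert is an illustrative helper, and crypto/rand, crypto/rsa, crypto/x509, crypto/x509/pkix, math/big, net and time imports are assumed:

    // newServerCert issues a server certificate signed by caCert/caKey with the
    // given DNS and IP SANs, returning the DER-encoded certificate and its key.
    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
        dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {

        key, err := rsa.GenerateKey(crand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-214441"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     dnsNames, // addons-214441, localhost, minikube
            IPAddresses:  ips,      // 127.0.0.1, 192.168.39.76
        }
        der, err := x509.CreateCertificate(crand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }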
	I0929 11:30:47.541100  595895 provision.go:177] copyRemoteCerts
	I0929 11:30:47.541199  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 11:30:47.541238  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.544486  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.544940  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.545024  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.545286  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.545574  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.545766  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.545940  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:30:47.632441  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 11:30:47.665928  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 11:30:47.699464  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
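copyRemoteCerts then pushes ca.pem, server.pem and server-key.pem over the SSH connection shown above. A sketch of writing a file on the guest through an SSH session with golang.org/x/crypto/ssh (sudo tee is used since the docker user cannot write /etc/docker directly; helper names are illustrative, and bytes, fmt, net, os and time imports are assumed):

    // dialSSH opens an authenticated connection to the guest using the machine's
    // private key, matching the StrictHostKeyChecking=no behaviour in the log.
    func dialSSH(ip, keyPath string) (*ssh.Client, error) {
        pemBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(pemBytes)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
            Timeout:         10 * time.Second,
        }
        return ssh.Dial("tcp", net.JoinHostPort(ip, "22"), cfg)
    }

    // writeRemoteFile streams data into dst on the guest via `sudo tee`.
    func writeRemoteFile(client *ssh.Client, data []byte, dst string) error {
        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()

        session.Stdin = bytes.NewReader(data)
        // tee writes stdin to the destination; its echo is discarded
        return session.Run(fmt.Sprintf("sudo tee %s > /dev/null", dst))
    }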
	I0929 11:30:47.731874  595895 provision.go:87] duration metric: took 515.680125ms to configureAuth
	I0929 11:30:47.731904  595895 buildroot.go:189] setting minikube options for container-runtime
	I0929 11:30:47.732120  595895 config.go:182] Loaded profile config "addons-214441": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:30:47.732187  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:47.732484  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.735606  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.736098  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.736147  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.736408  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.736676  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.736876  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.737026  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.737286  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:47.737503  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:47.737522  595895 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 11:30:47.845243  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0929 11:30:47.845278  595895 buildroot.go:70] root file system type: tmpfs
	I0929 11:30:47.845464  595895 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 11:30:47.845493  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.848685  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.849080  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.849125  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.849333  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.849561  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.849749  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.849921  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.850156  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:47.850438  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:47.850513  595895 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 11:30:47.980841  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 11:30:47.980885  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.984021  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.984467  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.984505  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.984746  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.984964  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.985145  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.985345  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.985533  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:47.985753  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:47.985769  595895 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 11:30:48.944806  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
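The one-liner above replaces docker.service and restarts Docker only when the rendered unit differs from what is installed; on this fresh VM the diff fails because no unit exists yet, so the service is enabled for the first time, as the 'Created symlink' output shows. A local sketch of the same compare-then-install pattern, run through os/exec rather than the driver's remote ssh_runner (bytes, fmt, os and os/exec imports are assumed):

    // installUnitIfChanged writes newUnit to path and only reloads, enables and
    // restarts the service when the content actually changed.
    func installUnitIfChanged(path string, newUnit []byte, service string) error {
        current, err := os.ReadFile(path)
        if err == nil && bytes.Equal(current, newUnit) {
            return nil // nothing to do
        }
        if err := os.WriteFile(path, newUnit, 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"},
            {"enable", service},
            {"restart", service},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }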
	
	I0929 11:30:48.944837  595895 main.go:141] libmachine: Checking connection to Docker...
	I0929 11:30:48.944847  595895 main.go:141] libmachine: (addons-214441) Calling .GetURL
	I0929 11:30:48.946423  595895 main.go:141] libmachine: (addons-214441) DBG | using libvirt version 8000000
	I0929 11:30:48.949334  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:48.949705  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:48.949727  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:48.949905  595895 main.go:141] libmachine: Docker is up and running!
	I0929 11:30:48.949918  595895 main.go:141] libmachine: Reticulating splines...
	I0929 11:30:48.949926  595895 client.go:171] duration metric: took 22.382562322s to LocalClient.Create
	I0929 11:30:48.949961  595895 start.go:167] duration metric: took 22.382646372s to libmachine.API.Create "addons-214441"
	I0929 11:30:48.949977  595895 start.go:293] postStartSetup for "addons-214441" (driver="kvm2")
	I0929 11:30:48.949995  595895 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 11:30:48.950016  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:48.950285  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 11:30:48.950309  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:48.952588  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:48.952941  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:48.952973  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:48.953140  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:48.953358  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:48.953522  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:48.953678  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:30:49.038834  595895 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 11:30:49.044530  595895 info.go:137] Remote host: Buildroot 2025.02
	I0929 11:30:49.044562  595895 filesync.go:126] Scanning /home/jenkins/minikube-integration/21654-591397/.minikube/addons for local assets ...
	I0929 11:30:49.044653  595895 filesync.go:126] Scanning /home/jenkins/minikube-integration/21654-591397/.minikube/files for local assets ...
	I0929 11:30:49.044700  595895 start.go:296] duration metric: took 94.715435ms for postStartSetup
	I0929 11:30:49.044748  595895 main.go:141] libmachine: (addons-214441) Calling .GetConfigRaw
	I0929 11:30:49.045427  595895 main.go:141] libmachine: (addons-214441) Calling .GetIP
	I0929 11:30:49.048440  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.048801  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.048825  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.049194  595895 profile.go:143] Saving config to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/config.json ...
	I0929 11:30:49.049405  595895 start.go:128] duration metric: took 22.499712752s to createHost
	I0929 11:30:49.049432  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:49.052122  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.052625  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.052654  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.052915  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:49.053180  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:49.053373  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:49.053538  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:49.053724  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:49.053929  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:49.053940  595895 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0929 11:30:49.163416  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759145449.126116077
	
	I0929 11:30:49.163441  595895 fix.go:216] guest clock: 1759145449.126116077
	I0929 11:30:49.163449  595895 fix.go:229] Guest: 2025-09-29 11:30:49.126116077 +0000 UTC Remote: 2025-09-29 11:30:49.049418276 +0000 UTC m=+22.624163516 (delta=76.697801ms)
	I0929 11:30:49.163493  595895 fix.go:200] guest clock delta is within tolerance: 76.697801ms
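
The fix.go lines above subtract the host-side timestamp from the guest clock read over SSH with date +%s.%N and accept the machine when the drift is small. A minimal Go sketch of that comparison, using the two timestamps taken from this log; the 2-second tolerance is an assumed value for illustration, not necessarily the threshold minikube applies:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Host-side timestamp recorded when the SSH command returned (from the log above).
        remote := time.Date(2025, 9, 29, 11, 30, 49, 49418276, time.UTC)
        // Guest clock parsed from `date +%s.%N`: 1759145449.126116077 (from the log above).
        guest := time.Unix(1759145449, 126116077).UTC()

        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }

        const tolerance = 2 * time.Second // assumed threshold, purely for illustration
        fmt.Printf("guest clock delta: %v, within tolerance: %v\n", delta, delta <= tolerance)
        // prints: guest clock delta: 76.697801ms, within tolerance: true
    }
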
	I0929 11:30:49.163499  595895 start.go:83] releasing machines lock for "addons-214441", held for 22.613874794s
	I0929 11:30:49.163528  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:49.163838  595895 main.go:141] libmachine: (addons-214441) Calling .GetIP
	I0929 11:30:49.166822  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.167209  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.167249  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.167420  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:49.168022  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:49.168252  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:49.168368  595895 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 11:30:49.168430  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:49.168489  595895 ssh_runner.go:195] Run: cat /version.json
	I0929 11:30:49.168513  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:49.172018  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.172253  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.172513  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.172540  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.172628  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.172666  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.172701  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:49.172958  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:49.173000  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:49.173136  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:49.173213  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:49.173301  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:49.173395  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:30:49.173457  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:30:49.251709  595895 ssh_runner.go:195] Run: systemctl --version
	I0929 11:30:49.275600  595895 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0929 11:30:49.282636  595895 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0929 11:30:49.282710  595895 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 11:30:49.304880  595895 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0929 11:30:49.304913  595895 start.go:495] detecting cgroup driver to use...
	I0929 11:30:49.305043  595895 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 11:30:49.330757  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 11:30:49.345061  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 11:30:49.359226  595895 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0929 11:30:49.359329  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0929 11:30:49.373874  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 11:30:49.388075  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 11:30:49.401811  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 11:30:49.415626  595895 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 11:30:49.431189  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 11:30:49.445445  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 11:30:49.459477  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
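
Taken together, the sed edits above rewrite /etc/containerd/config.toml so containerd uses the registry.k8s.io/pause:3.10.1 sandbox image, the runc v2 runtime and the cgroupfs cgroup driver. A short Go sketch that applies the same three expressions to a local copy of the file; the ./config.toml path is an assumption for experimenting outside the guest, since minikube runs these commands in the VM over SSH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Local copy of the file; minikube edits /etc/containerd/config.toml
        // inside the guest, as the ssh_runner lines above show.
        const cfg = "./config.toml"

        edits := []string{
            `sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' ` + cfg,
            `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' ` + cfg,
            `sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' ` + cfg,
        }
        for _, e := range edits {
            if out, err := exec.Command("/bin/bash", "-c", e).CombinedOutput(); err != nil {
                fmt.Printf("edit failed: %v\n%s", err, out)
                return
            }
        }
        fmt.Println("config.toml now pins the pause image and selects cgroupfs")
    }
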
	I0929 11:30:49.473176  595895 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 11:30:49.485689  595895 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0929 11:30:49.485783  595895 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0929 11:30:49.499975  595895 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
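
The sysctl probe above fails only because the br_netfilter module is not yet loaded in the fresh guest, so minikube loads it with modprobe and then enables IPv4 forwarding. A small Go sketch that verifies both settings once those commands have run (it would be run inside the guest, e.g. via minikube ssh):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // The first file appears once `modprobe br_netfilter` has run; the second
        // is set to 1 by the `echo 1 > /proc/sys/net/ipv4/ip_forward` step above.
        keys := []string{
            "/proc/sys/net/bridge/bridge-nf-call-iptables",
            "/proc/sys/net/ipv4/ip_forward",
        }
        for _, k := range keys {
            b, err := os.ReadFile(k)
            if err != nil {
                fmt.Printf("%s: not available (%v)\n", k, err)
                continue
            }
            fmt.Printf("%s = %s\n", k, strings.TrimSpace(string(b)))
        }
    }
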
	I0929 11:30:49.513013  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:49.660311  595895 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 11:30:49.703655  595895 start.go:495] detecting cgroup driver to use...
	I0929 11:30:49.703755  595895 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 11:30:49.722813  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 11:30:49.750032  595895 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 11:30:49.777529  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 11:30:49.795732  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 11:30:49.813375  595895 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0929 11:30:49.851205  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 11:30:49.869489  595895 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 11:30:49.896122  595895 ssh_runner.go:195] Run: which cri-dockerd
	I0929 11:30:49.900877  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 11:30:49.914013  595895 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 11:30:49.937663  595895 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 11:30:50.087078  595895 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 11:30:50.258242  595895 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0929 11:30:50.258407  595895 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
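
The 130-byte daemon.json written here is what makes dockerd use the same cgroupfs cgroup driver the kubelet will be configured with. The sketch below emits a generic fragment of that kind; exec-opts with native.cgroupdriver is standard dockerd configuration, but this is not claimed to be minikube's exact file:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Generic dockerd daemon.json fragment selecting the cgroupfs driver.
        cfg := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        b, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(string(b))
    }
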
	I0929 11:30:50.281600  595895 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 11:30:50.297843  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:50.442188  595895 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 11:30:51.468324  595895 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.026092315s)
	I0929 11:30:51.468405  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 11:30:51.485284  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 11:30:51.502338  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 11:30:51.520247  595895 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 11:30:51.674618  595895 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 11:30:51.823542  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:51.969743  595895 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 11:30:52.010885  595895 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 11:30:52.027992  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:52.187556  595895 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 11:30:52.300820  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 11:30:52.324658  595895 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 11:30:52.324786  595895 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 11:30:52.331994  595895 start.go:563] Will wait 60s for crictl version
	I0929 11:30:52.332070  595895 ssh_runner.go:195] Run: which crictl
	I0929 11:30:52.336923  595895 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 11:30:52.378177  595895 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 11:30:52.378280  595895 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 11:30:52.410851  595895 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 11:30:52.543475  595895 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 11:30:52.543553  595895 main.go:141] libmachine: (addons-214441) Calling .GetIP
	I0929 11:30:52.546859  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:52.547288  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:52.547313  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:52.547612  595895 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0929 11:30:52.553031  595895 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 11:30:52.570843  595895 kubeadm.go:875] updating cluster {Name:addons-214441 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 11:30:52.570982  595895 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 11:30:52.571045  595895 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 11:30:52.589813  595895 docker.go:691] Got preloaded images: 
	I0929 11:30:52.589850  595895 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.0 wasn't preloaded
	I0929 11:30:52.589920  595895 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0929 11:30:52.603859  595895 ssh_runner.go:195] Run: which lz4
	I0929 11:30:52.608929  595895 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0929 11:30:52.614449  595895 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0929 11:30:52.614480  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (353447550 bytes)
	I0929 11:30:54.030641  595895 docker.go:655] duration metric: took 1.421784291s to copy over tarball
	I0929 11:30:54.030729  595895 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0929 11:30:55.448691  595895 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.417923545s)
	I0929 11:30:55.448737  595895 ssh_runner.go:146] rm: /preloaded.tar.lz4
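
The preload tarball avoids pulling each control-plane image separately: 353,447,550 bytes are copied into the guest and unpacked under /var in well under three seconds. The arithmetic, as a Go sketch using only figures from the log above:

    package main

    import "fmt"

    func main() {
        const tarballBytes = 353447550.0 // preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
        const copySeconds = 1.421784291  // "took 1.421784291s to copy over tarball"
        const extractSeconds = 1.417923545

        mib := tarballBytes / (1 << 20)
        fmt.Printf("tarball size:        %.1f MiB\n", mib)
        fmt.Printf("scp throughput:      ~%.0f MiB/s\n", mib/copySeconds)
        fmt.Printf("lz4+tar extraction:  ~%.0f MiB/s\n", mib/extractSeconds)
    }
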
	I0929 11:30:55.496341  595895 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0929 11:30:55.514175  595895 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2632 bytes)
	I0929 11:30:55.539628  595895 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 11:30:55.556201  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:55.705196  595895 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 11:30:57.773379  595895 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.068131004s)
	I0929 11:30:57.773509  595895 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 11:30:57.795878  595895 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 11:30:57.795910  595895 cache_images.go:85] Images are preloaded, skipping loading
	I0929 11:30:57.795931  595895 kubeadm.go:926] updating node { 192.168.39.76 8443 v1.34.0 docker true true} ...
	I0929 11:30:57.796049  595895 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-214441 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
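
The drop-in above is how minikube pins the kubelet to this node: the ExecStart line carries the hostname override, node IP and kubeconfig paths for addons-214441. A Go sketch that renders the same flag line from node values with text/template; the struct and field names are illustrative, not minikube's own:

    package main

    import (
        "os"
        "text/template"
    )

    // Node-specific values that vary between profiles; everything else in the
    // ExecStart line above is fixed.
    type node struct {
        Name, IP, KubeletPath string
    }

    func main() {
        tmpl := template.Must(template.New("kubelet").Parse(
            "ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf " +
                "--config=/var/lib/kubelet/config.yaml --hostname-override={{.Name}} " +
                "--kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}\n"))
        _ = tmpl.Execute(os.Stdout, node{
            Name:        "addons-214441",
            IP:          "192.168.39.76",
            KubeletPath: "/var/lib/minikube/binaries/v1.34.0/kubelet",
        })
    }
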
	I0929 11:30:57.796127  595895 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 11:30:57.852690  595895 cni.go:84] Creating CNI manager for ""
	I0929 11:30:57.852756  595895 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 11:30:57.852774  595895 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 11:30:57.852803  595895 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.76 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-214441 NodeName:addons-214441 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.76"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.76 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 11:30:57.852981  595895 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.76
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-214441"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.76"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.76"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 11:30:57.853053  595895 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 11:30:57.866164  595895 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 11:30:57.866236  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 11:30:57.879054  595895 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0929 11:30:57.901136  595895 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 11:30:57.922808  595895 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
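
The YAML rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is the 2217-byte kubeadm.yaml.new copied here and handed to kubeadm init --config below. A stdlib-only Go sketch that lists the document kinds in such a multi-document file; the path is the one from the log, so it would normally be read inside the guest:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer f.Close()

        // Print the `kind:` of each YAML document in the generated config.
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if strings.HasPrefix(line, "kind:") {
                fmt.Println(strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
            }
        }
    }
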
	I0929 11:30:57.944391  595895 ssh_runner.go:195] Run: grep 192.168.39.76	control-plane.minikube.internal$ /etc/hosts
	I0929 11:30:57.949077  595895 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.76	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 11:30:57.965713  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:58.115608  595895 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:30:58.151915  595895 certs.go:68] Setting up /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441 for IP: 192.168.39.76
	I0929 11:30:58.151940  595895 certs.go:194] generating shared ca certs ...
	I0929 11:30:58.151960  595895 certs.go:226] acquiring lock for ca certs: {Name:mk707c73ecd79d5343eca8617a792346e0c7ccb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.152119  595895 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21654-591397/.minikube/ca.key
	I0929 11:30:58.470474  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/ca.crt ...
	I0929 11:30:58.470507  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/ca.crt: {Name:mk182656d7edea57f023d2e0db199cb4225a8b4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.470704  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/ca.key ...
	I0929 11:30:58.470715  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/ca.key: {Name:mkd9949b3876b9f68542fba6d581787f4502134f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.470791  595895 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.key
	I0929 11:30:58.721631  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.crt ...
	I0929 11:30:58.721664  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.crt: {Name:mk28d9b982dd4335b19ce60c764e1cd1a4d53764 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.721838  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.key ...
	I0929 11:30:58.721850  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.key: {Name:mk92f9d60795b7f581dcb4003e857f2fb68fb997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.721920  595895 certs.go:256] generating profile certs ...
	I0929 11:30:58.721989  595895 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.key
	I0929 11:30:58.722004  595895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt with IP's: []
	I0929 11:30:59.043304  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt ...
	I0929 11:30:59.043336  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: {Name:mkd724da95490eed1b0581ef6c65a2b1785468b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.043499  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.key ...
	I0929 11:30:59.043510  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.key: {Name:mkba543125a928af6b44a2eb304c49514c816581 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.043578  595895 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key.adbef5ab
	I0929 11:30:59.043598  595895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt.adbef5ab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.76]
	I0929 11:30:59.456164  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt.adbef5ab ...
	I0929 11:30:59.456200  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt.adbef5ab: {Name:mk5a23687be38fbd7ef5257880d1d7f5b199f933 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.456424  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key.adbef5ab ...
	I0929 11:30:59.456443  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key.adbef5ab: {Name:mke7b9b847497d2728644e9b30a8393a50e57e5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.456526  595895 certs.go:381] copying /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt.adbef5ab -> /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt
	I0929 11:30:59.456638  595895 certs.go:385] copying /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key.adbef5ab -> /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key
	I0929 11:30:59.456705  595895 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.key
	I0929 11:30:59.456726  595895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.crt with IP's: []
	I0929 11:30:59.785388  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.crt ...
	I0929 11:30:59.785424  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.crt: {Name:mkb2afc6ab3119c9842fe1ce2f48d7c6196dbfb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.785611  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.key ...
	I0929 11:30:59.785642  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.key: {Name:mk6b37b3ae22881d553c47031d96c6f22bdfded2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.785833  595895 certs.go:484] found cert: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 11:30:59.785879  595895 certs.go:484] found cert: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem (1082 bytes)
	I0929 11:30:59.785905  595895 certs.go:484] found cert: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/cert.pem (1123 bytes)
	I0929 11:30:59.785932  595895 certs.go:484] found cert: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/key.pem (1675 bytes)
	I0929 11:30:59.786662  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 11:30:59.821270  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 11:30:59.853588  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 11:30:59.885559  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 11:30:59.916538  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 11:30:59.948991  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 11:30:59.981478  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 11:31:00.014753  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0929 11:31:00.046891  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 11:31:00.079370  595895 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 11:31:00.101600  595895 ssh_runner.go:195] Run: openssl version
	I0929 11:31:00.108829  595895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 11:31:00.123448  595895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:31:00.129416  595895 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 11:30 /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:31:00.129502  595895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:31:00.137583  595895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
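
Installing the cluster CA as /usr/share/ca-certificates/minikubeCA.pem and linking it as /etc/ssl/certs/b5213941.0 (the OpenSSL subject hash plus a .0 suffix) is what lets TLS clients inside the guest trust certificates signed by minikubeCA. A Go sketch that recomputes the expected link name with the same openssl invocation shown in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same command the log runs on the guest, against the same path.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
        if err != nil {
            fmt.Println("openssl failed:", err)
            return
        }
        hash := strings.TrimSpace(string(out))
        // The symlink created above is /etc/ssl/certs/<hash>.0 -> minikubeCA.pem.
        fmt.Printf("expected trust-store link: /etc/ssl/certs/%s.0\n", hash)
    }
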
	I0929 11:31:00.152396  595895 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 11:31:00.157895  595895 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 11:31:00.157960  595895 kubeadm.go:392] StartCluster: {Name:addons-214441 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:31:00.158083  595895 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 11:31:00.176917  595895 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 11:31:00.190119  595895 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 11:31:00.203558  595895 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 11:31:00.216736  595895 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 11:31:00.216758  595895 kubeadm.go:157] found existing configuration files:
	
	I0929 11:31:00.216805  595895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 11:31:00.229008  595895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 11:31:00.229138  595895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 11:31:00.242441  595895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 11:31:00.254460  595895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 11:31:00.254523  595895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 11:31:00.268124  595895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 11:31:00.284523  595895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 11:31:00.284596  595895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 11:31:00.297510  595895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 11:31:00.311858  595895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 11:31:00.311927  595895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 11:31:00.329319  595895 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0929 11:31:00.392668  595895 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 11:31:00.392776  595895 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 11:31:00.500945  595895 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 11:31:00.501073  595895 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 11:31:00.501248  595895 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 11:31:00.518470  595895 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 11:31:00.521672  595895 out.go:252]   - Generating certificates and keys ...
	I0929 11:31:00.521778  595895 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 11:31:00.521835  595895 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 11:31:00.844406  595895 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 11:31:01.356940  595895 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 11:31:01.469316  595895 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 11:31:01.609628  595895 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 11:31:01.854048  595895 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 11:31:01.854239  595895 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-214441 localhost] and IPs [192.168.39.76 127.0.0.1 ::1]
	I0929 11:31:02.222219  595895 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 11:31:02.222361  595895 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-214441 localhost] and IPs [192.168.39.76 127.0.0.1 ::1]
	I0929 11:31:02.331774  595895 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 11:31:02.452417  595895 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 11:31:03.277600  595895 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 11:31:03.277709  595895 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 11:31:03.337296  595895 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 11:31:03.576740  595895 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 11:31:03.754957  595895 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 11:31:04.028596  595895 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 11:31:04.458901  595895 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 11:31:04.459731  595895 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 11:31:04.461956  595895 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 11:31:04.463895  595895 out.go:252]   - Booting up control plane ...
	I0929 11:31:04.464031  595895 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 11:31:04.464116  595895 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 11:31:04.464220  595895 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 11:31:04.482430  595895 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 11:31:04.482595  595895 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 11:31:04.490659  595895 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 11:31:04.490827  595895 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 11:31:04.490920  595895 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 11:31:04.666361  595895 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 11:31:04.666495  595895 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 11:31:05.175870  595895 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 510.006022ms
	I0929 11:31:05.187944  595895 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 11:31:05.188057  595895 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.76:8443/livez
	I0929 11:31:05.188256  595895 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 11:31:05.188362  595895 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 11:31:07.767053  595895 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.579446651s
	I0929 11:31:09.215755  595895 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.029766048s
	I0929 11:31:11.189186  595895 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.002998119s
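
The control-plane-check phase polls each component's local health endpoint until it answers or the 4-minute deadline expires, which is why the three components report healthy at different times above. A rough Go sketch of that kind of poll against the kube-controller-manager endpoint from the log; the 500 ms interval and the relaxed TLS handling are assumptions for illustration:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The component serves a self-signed cert on localhost, so this sketch
            // skips verification; a real caller would pin the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute) // "This can take up to 4m0s"
        url := "https://127.0.0.1:10257/healthz"    // kube-controller-manager, per the log

        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil && resp.StatusCode == http.StatusOK {
                resp.Body.Close()
                fmt.Println("kube-controller-manager is healthy")
                return
            }
            if resp != nil {
                resp.Body.Close()
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("control plane did not become healthy before the deadline")
    }
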
	I0929 11:31:11.214239  595895 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 11:31:11.232892  595895 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 11:31:11.255389  595895 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 11:31:11.255580  595895 kubeadm.go:310] [mark-control-plane] Marking the node addons-214441 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 11:31:11.270844  595895 kubeadm.go:310] [bootstrap-token] Using token: 7wgemt.sdnt4jx2dgy9ll51
	I0929 11:31:11.272442  595895 out.go:252]   - Configuring RBAC rules ...
	I0929 11:31:11.272557  595895 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 11:31:11.279364  595895 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 11:31:11.294463  595895 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 11:31:11.298793  595895 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 11:31:11.306582  595895 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 11:31:11.323727  595895 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 11:31:11.601710  595895 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 11:31:12.069553  595895 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 11:31:12.597044  595895 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 11:31:12.597931  595895 kubeadm.go:310] 
	I0929 11:31:12.598017  595895 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 11:31:12.598026  595895 kubeadm.go:310] 
	I0929 11:31:12.598142  595895 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 11:31:12.598153  595895 kubeadm.go:310] 
	I0929 11:31:12.598181  595895 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 11:31:12.598281  595895 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 11:31:12.598374  595895 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 11:31:12.598390  595895 kubeadm.go:310] 
	I0929 11:31:12.598436  595895 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 11:31:12.598442  595895 kubeadm.go:310] 
	I0929 11:31:12.598481  595895 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 11:31:12.598497  595895 kubeadm.go:310] 
	I0929 11:31:12.598577  595895 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 11:31:12.598692  595895 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 11:31:12.598809  595895 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 11:31:12.598828  595895 kubeadm.go:310] 
	I0929 11:31:12.598937  595895 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 11:31:12.599041  595895 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 11:31:12.599055  595895 kubeadm.go:310] 
	I0929 11:31:12.599196  595895 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7wgemt.sdnt4jx2dgy9ll51 \
	I0929 11:31:12.599332  595895 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f2c60916dfe453d33b3c30594ac6ac02ea520a59a4bdac7119cd9e587175e1eb \
	I0929 11:31:12.599365  595895 kubeadm.go:310] 	--control-plane 
	I0929 11:31:12.599397  595895 kubeadm.go:310] 
	I0929 11:31:12.599486  595895 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 11:31:12.599496  595895 kubeadm.go:310] 
	I0929 11:31:12.599568  595895 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7wgemt.sdnt4jx2dgy9ll51 \
	I0929 11:31:12.599705  595895 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f2c60916dfe453d33b3c30594ac6ac02ea520a59a4bdac7119cd9e587175e1eb 
	I0929 11:31:12.601217  595895 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 11:31:12.601272  595895 cni.go:84] Creating CNI manager for ""
	I0929 11:31:12.601305  595895 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 11:31:12.603223  595895 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0929 11:31:12.604766  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0929 11:31:12.618554  595895 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
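
The 496-byte 1-k8s.conflist written here is the bridge CNI chain that will hand out pod addresses from the 10.244.0.0/16 pod CIDR configured earlier. A Go sketch that prints the plugin chain of such a file; the struct mirrors the generic CNI conflist layout (name, cniVersion, plugins[].type), not minikube's exact contents, and the path would be read inside the guest:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // Minimal view of a CNI .conflist: a name, a cniVersion and a plugin chain.
    type conflist struct {
        Name       string `json:"name"`
        CNIVersion string `json:"cniVersion"`
        Plugins    []struct {
            Type string `json:"type"`
        } `json:"plugins"`
    }

    func main() {
        b, err := os.ReadFile("/etc/cni/net.d/1-k8s.conflist")
        if err != nil {
            fmt.Println(err)
            return
        }
        var c conflist
        if err := json.Unmarshal(b, &c); err != nil {
            fmt.Println("not valid JSON:", err)
            return
        }
        fmt.Printf("network %q (cniVersion %s), plugin chain:\n", c.Name, c.CNIVersion)
        for _, p := range c.Plugins {
            fmt.Println("  -", p.Type)
        }
    }
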
	I0929 11:31:12.641768  595895 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 11:31:12.641942  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:12.641954  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-214441 minikube.k8s.io/updated_at=2025_09_29T11_31_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b8c533eb3dce8338d3d4e7231ea97d1c44ed6f81 minikube.k8s.io/name=addons-214441 minikube.k8s.io/primary=true
	I0929 11:31:12.682767  595895 ops.go:34] apiserver oom_adj: -16
	I0929 11:31:12.800130  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:13.300439  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:13.800339  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:14.300644  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:14.800381  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:15.301049  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:15.801207  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:16.301226  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:16.801024  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:17.300849  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:17.440215  595895 kubeadm.go:1105] duration metric: took 4.798376612s to wait for elevateKubeSystemPrivileges
	I0929 11:31:17.440271  595895 kubeadm.go:394] duration metric: took 17.282308974s to StartCluster
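
The repeated kubectl get sa default runs above are a roughly 500 ms poll: minikube waits for the default service account to exist before treating kube-system privileges as elevated, which here took about 4.8 s. A Go sketch of an equivalent poll; the kubeconfig path comes from the log, while the two-minute deadline is an assumed bound:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Poll until the "default" ServiceAccount exists, mirroring the repeated
        // `kubectl get sa default` calls in the log.
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
                "get", "sa", "default", "-n", "default")
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default service account")
    }
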
	I0929 11:31:17.440297  595895 settings.go:142] acquiring lock: {Name:mk832bb073af4ae47756dd4494ea087d7aa99c2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:31:17.440448  595895 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21654-591397/kubeconfig
	I0929 11:31:17.441186  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/kubeconfig: {Name:mk64b4db01785e3abeedb000f7d1263b1f56db2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:31:17.441409  595895 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 11:31:17.441416  595895 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 11:31:17.441496  595895 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0929 11:31:17.441684  595895 config.go:182] Loaded profile config "addons-214441": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:31:17.441696  595895 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-214441"
	I0929 11:31:17.441708  595895 addons.go:69] Setting yakd=true in profile "addons-214441"
	I0929 11:31:17.441736  595895 addons.go:238] Setting addon yakd=true in "addons-214441"
	I0929 11:31:17.441757  595895 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-214441"
	I0929 11:31:17.441709  595895 addons.go:69] Setting ingress=true in profile "addons-214441"
	I0929 11:31:17.441784  595895 addons.go:238] Setting addon ingress=true in "addons-214441"
	I0929 11:31:17.441793  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.441803  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.441799  595895 addons.go:69] Setting default-storageclass=true in profile "addons-214441"
	I0929 11:31:17.441840  595895 addons.go:69] Setting gcp-auth=true in profile "addons-214441"
	I0929 11:31:17.441876  595895 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-214441"
	I0929 11:31:17.441886  595895 mustload.go:65] Loading cluster: addons-214441
	I0929 11:31:17.441893  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442145  595895 addons.go:69] Setting registry=true in profile "addons-214441"
	I0929 11:31:17.442160  595895 addons.go:238] Setting addon registry=true in "addons-214441"
	I0929 11:31:17.442191  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442280  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.442300  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.442353  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.442366  595895 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-214441"
	I0929 11:31:17.442371  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.442380  595895 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-214441"
	I0929 11:31:17.442381  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.442385  595895 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-214441"
	I0929 11:31:17.442396  595895 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-214441"
	I0929 11:31:17.442399  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.442425  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.442400  595895 addons.go:69] Setting cloud-spanner=true in profile "addons-214441"
	I0929 11:31:17.442448  595895 addons.go:69] Setting registry-creds=true in profile "addons-214441"
	I0929 11:31:17.442456  595895 addons.go:238] Setting addon cloud-spanner=true in "addons-214441"
	I0929 11:31:17.442469  595895 addons.go:238] Setting addon registry-creds=true in "addons-214441"
	I0929 11:31:17.442478  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.442491  595895 addons.go:69] Setting storage-provisioner=true in profile "addons-214441"
	I0929 11:31:17.442514  595895 addons.go:238] Setting addon storage-provisioner=true in "addons-214441"
	I0929 11:31:17.442543  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442544  595895 addons.go:69] Setting inspektor-gadget=true in profile "addons-214441"
	I0929 11:31:17.442557  595895 addons.go:238] Setting addon inspektor-gadget=true in "addons-214441"
	I0929 11:31:17.442563  595895 addons.go:69] Setting ingress-dns=true in profile "addons-214441"
	I0929 11:31:17.442575  595895 addons.go:238] Setting addon ingress-dns=true in "addons-214441"
	I0929 11:31:17.442588  595895 addons.go:69] Setting metrics-server=true in profile "addons-214441"
	I0929 11:31:17.442591  595895 addons.go:69] Setting volumesnapshots=true in profile "addons-214441"
	I0929 11:31:17.442599  595895 addons.go:238] Setting addon metrics-server=true in "addons-214441"
	I0929 11:31:17.442610  595895 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-214441"
	I0929 11:31:17.442602  595895 config.go:182] Loaded profile config "addons-214441": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:31:17.442620  595895 addons.go:238] Setting addon volumesnapshots=true in "addons-214441"
	I0929 11:31:17.442622  595895 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-214441"
	I0929 11:31:17.442631  595895 addons.go:69] Setting volcano=true in profile "addons-214441"
	I0929 11:31:17.442647  595895 addons.go:238] Setting addon volcano=true in "addons-214441"
	I0929 11:31:17.442826  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442847  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442963  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443004  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443177  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443198  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443212  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443242  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443255  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.443270  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443292  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443439  595895 out.go:179] * Verifying Kubernetes components...
	I0929 11:31:17.443489  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443521  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443564  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443603  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443459  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.443699  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.443852  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443879  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443895  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.444137  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.444199  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.444468  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.454269  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:31:17.455462  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.455556  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.457160  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.457213  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.458697  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.458765  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.459732  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37039
	I0929 11:31:17.459901  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.459979  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.460127  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.460161  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.460170  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.460239  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.460291  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44679
	I0929 11:31:17.460695  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.463901  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.463928  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.464092  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.465162  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.465408  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.466171  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.466824  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.467158  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.479447  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.479512  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.482323  595895 addons.go:238] Setting addon default-storageclass=true in "addons-214441"
	I0929 11:31:17.482391  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.482773  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.482798  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.493064  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45797
	I0929 11:31:17.493710  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40233
	I0929 11:31:17.496980  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.497697  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.497723  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.498583  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.499544  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.500891  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.502188  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.503325  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.503345  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.503676  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33541
	I0929 11:31:17.503826  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.504644  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.504730  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.505209  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.506256  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.506279  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.506340  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36303
	I0929 11:31:17.506984  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46129
	I0929 11:31:17.507294  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.507677  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.507745  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37135
	I0929 11:31:17.508552  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.509057  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.509394  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.509407  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.509415  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.510041  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.510142  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.510163  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.511579  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.513259  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.513521  595895 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-214441"
	I0929 11:31:17.513538  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46151
	I0929 11:31:17.513575  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.514124  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.514166  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.511927  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.514352  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.513596  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42277
	I0929 11:31:17.520718  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.520752  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44361
	I0929 11:31:17.521039  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.521092  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33947
	I0929 11:31:17.521207  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43367
	I0929 11:31:17.520724  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37739
	I0929 11:31:17.522317  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.522444  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.522469  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.522507  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.522852  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.522920  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.523211  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.523225  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.523306  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.523461  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.523473  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.524082  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.524376  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.524523  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.524535  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.524631  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.524746  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34633
	I0929 11:31:17.529249  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.529354  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.529387  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.529766  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.529766  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.529799  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.529807  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.529908  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.530061  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.530343  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.530371  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.530465  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.530878  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.530932  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.531382  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.531639  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.531658  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.532124  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.532483  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.533015  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.533033  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.533472  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.533508  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.534270  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.535229  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.535779  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.535886  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.537511  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.538187  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39035
	I0929 11:31:17.539952  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.540005  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.540222  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I0929 11:31:17.540575  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43245
	I0929 11:31:17.540786  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.540854  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.540890  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.541625  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.541647  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.542032  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.542195  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.542600  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.543176  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.543185  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.543199  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.543204  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.543307  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
	I0929 11:31:17.544136  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.544545  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.544610  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.544640  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.545415  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.545449  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.546464  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.546490  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.546965  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.547387  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.548714  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.548795  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.550669  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38149
	I0929 11:31:17.551412  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.551773  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34225
	I0929 11:31:17.552171  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.552255  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.552199  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.552753  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.552854  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.553685  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.553778  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.554307  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.554514  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.555149  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.557383  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.558025  595895 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 11:31:17.559210  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 11:31:17.559231  595895 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 11:31:17.559262  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.559338  595895 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.12.2
	I0929 11:31:17.560620  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.560681  595895 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.12.2
	I0929 11:31:17.560823  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34665
	I0929 11:31:17.561393  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.562236  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.562295  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.562751  595895 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 11:31:17.563140  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.563492  595895 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.12.2
	I0929 11:31:17.564252  595895 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 11:31:17.564269  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 11:31:17.564289  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.564293  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.564684  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.564737  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.565023  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.565146  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.567800  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.568057  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.568262  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36805
	I0929 11:31:17.568522  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.568701  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.569229  595895 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0929 11:31:17.569253  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (498149 bytes)
	I0929 11:31:17.569273  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.569959  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.570047  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.572257  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.572409  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.572423  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.573470  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.573495  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.573534  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42077
	I0929 11:31:17.574161  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.574166  595895 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 11:31:17.574420  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.574975  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.575036  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.575329  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.575415  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.575430  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.575671  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.575865  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.576099  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.577061  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.577247  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.577378  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.577535  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.577554  595895 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 11:31:17.577582  595895 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 11:31:17.577605  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.579736  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40239
	I0929 11:31:17.580597  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.581383  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.581446  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.582289  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.582694  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44947
	I0929 11:31:17.582952  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.583853  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.585630  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46433
	I0929 11:31:17.585637  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38605
	I0929 11:31:17.586733  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.586755  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.586846  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.587240  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.587458  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.587548  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.587503  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I0929 11:31:17.588342  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.588817  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.588838  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.589534  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35991
	I0929 11:31:17.589680  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.589727  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.589953  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.590461  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.590684  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.590701  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.590814  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.590864  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.591866  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.592243  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.592985  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.593774  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.593791  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.594759  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.595210  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.595390  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.596824  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.597871  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.598227  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.598762  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35305
	I0929 11:31:17.599344  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 11:31:17.600871  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.600871  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.600871  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.600928  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.600961  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.600994  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39573
	I0929 11:31:17.601002  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43843
	I0929 11:31:17.601641  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 11:31:17.601827  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.601850  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.601913  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.602052  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.602151  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I0929 11:31:17.602155  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.602306  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.602590  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.602610  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.602811  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.602977  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.603038  595895 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 11:31:17.603089  595895 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 11:31:17.603260  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.603328  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.603564  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.603593  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.603752  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.604258  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.604320  595895 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 11:31:17.604825  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.604525  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.605686  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.605694  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.604846  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.604946  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 11:31:17.605125  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.606062  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.606154  595895 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 11:31:17.606169  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.606174  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.607283  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.607459  595895 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 11:31:17.607513  595895 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 11:31:17.608000  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 11:31:17.608022  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.607722  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.607825  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.608327  595895 out.go:179]   - Using image docker.io/busybox:stable
	I0929 11:31:17.608504  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.609208  595895 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 11:31:17.609380  595895 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 11:31:17.609617  595895 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 11:31:17.609695  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.609885  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
	I0929 11:31:17.610214  595895 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 11:31:17.610480  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 11:31:17.610442  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.610634  595895 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:31:17.610651  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 11:31:17.610666  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.610637  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 11:31:17.610551  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.611056  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.611127  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.611242  595895 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 11:31:17.612177  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.612200  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.612367  595895 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 11:31:17.612539  595895 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 11:31:17.612558  595895 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 11:31:17.612574  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 11:31:17.612702  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.612652  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.613066  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.613132  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.613978  595895 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 11:31:17.614058  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 11:31:17.614157  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.614015  595895 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 11:31:17.614286  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 11:31:17.614314  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.614339  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40205
	I0929 11:31:17.614532  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 11:31:17.614774  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.614918  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 11:31:17.615384  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.615994  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.616036  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.616065  595895 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 11:31:17.616139  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 11:31:17.616150  595895 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 11:31:17.616217  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.616451  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.616766  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.617254  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 11:31:17.618390  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.618595  595895 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 11:31:17.619658  595895 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 11:31:17.619715  595895 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 11:31:17.619728  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 11:31:17.619752  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.619788  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 11:31:17.620191  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.620909  595895 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 11:31:17.620926  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 11:31:17.621015  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.621216  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.622235  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.622260  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.622296  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 11:31:17.622987  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.623010  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.623146  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.623384  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.623851  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 11:31:17.623870  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 11:31:17.623891  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.623910  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.623977  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.623991  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.624284  595895 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 11:31:17.624300  595895 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 11:31:17.624317  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.624324  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.624330  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.624655  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.624690  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.625088  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.625297  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.626099  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.626182  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.626247  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.626251  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.626597  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.626789  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.626890  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.627091  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.627238  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.627284  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.627374  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.627541  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.627907  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.627938  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.627949  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.627979  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.628066  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.628081  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.628268  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.628308  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.628533  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.628572  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.628735  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.628848  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.629214  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.629249  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.629266  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.629512  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.629592  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.629606  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.629764  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.629861  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.630008  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.630062  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.630142  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.630197  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.630311  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.630370  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.630910  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.631305  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.631821  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.632272  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.632296  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.632442  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.632503  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.632710  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.632789  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.633084  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.633162  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.633176  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.633207  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.633228  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.633242  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.633391  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.633435  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.633557  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.633619  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.633759  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.633793  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.634042  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.634042  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.634131  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.634164  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.634219  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.634716  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.634894  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.635088  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.635265  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	W0929 11:31:17.919750  595895 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44698->192.168.39.76:22: read: connection reset by peer
	I0929 11:31:17.919798  595895 retry.go:31] will retry after 127.603101ms: ssh: handshake failed: read tcp 192.168.39.1:44698->192.168.39.76:22: read: connection reset by peer
	W0929 11:31:17.927998  595895 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44716->192.168.39.76:22: read: connection reset by peer
	I0929 11:31:17.928034  595895 retry.go:31] will retry after 352.316454ms: ssh: handshake failed: read tcp 192.168.39.1:44716->192.168.39.76:22: read: connection reset by peer
	I0929 11:31:18.834850  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 11:31:18.834892  595895 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 11:31:18.867206  595895 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 11:31:18.867237  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 11:31:18.998018  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 11:31:19.019969  595895 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.57851512s)
	I0929 11:31:19.019988  595895 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.56567428s)
	I0929 11:31:19.020058  595895 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:31:19.020195  595895 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
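The sed pipeline in the line above rewrites the coredns ConfigMap so that pods can resolve host.minikube.internal to the host-side gateway 192.168.39.1. Reconstructed from the two sed expressions (the surrounding plugin lines are assumed stock kubeadm defaults and are not taken from this log), the patched Corefile fragment would look roughly like:

    .:53 {
        log
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        cache 30
    }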
	I0929 11:31:19.047383  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 11:31:19.178551  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 11:31:19.194460  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 11:31:19.203493  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:31:19.224634  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0929 11:31:19.236908  595895 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:19.236937  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 11:31:19.339094  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 11:31:19.470368  595895 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 11:31:19.470407  595895 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 11:31:19.482955  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 11:31:19.507279  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 11:31:19.533452  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 11:31:19.533481  595895 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 11:31:19.580275  595895 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 11:31:19.580310  595895 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 11:31:19.612191  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 11:31:19.612228  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 11:31:19.656222  595895 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 11:31:19.656250  595895 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 11:31:19.707608  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 11:31:19.720943  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:19.949642  595895 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 11:31:19.949675  595895 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 11:31:20.010236  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 11:31:20.010269  595895 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 11:31:20.143152  595895 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 11:31:20.143179  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 11:31:20.164194  595895 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 11:31:20.164223  595895 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 11:31:20.178619  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 11:31:20.178652  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 11:31:20.352326  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 11:31:20.352354  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 11:31:20.399905  595895 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 11:31:20.399935  595895 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 11:31:20.528800  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 11:31:20.554026  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 11:31:20.608085  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 11:31:20.608132  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 11:31:20.855879  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 11:31:20.901072  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 11:31:20.901124  595895 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 11:31:21.046874  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 11:31:21.046903  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 11:31:21.279957  595895 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 11:31:21.279985  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 11:31:21.494633  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 11:31:21.494662  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 11:31:21.896279  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 11:31:22.355612  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 11:31:22.355644  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 11:31:23.136046  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 11:31:23.136083  595895 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 11:31:23.742895  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 11:31:23.742921  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 11:31:24.397559  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 11:31:24.397588  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 11:31:24.806696  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 11:31:24.806729  595895 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 11:31:25.028630  595895 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 11:31:25.028675  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:25.032868  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:25.033494  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:25.033526  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:25.033760  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:25.034027  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:25.034259  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:25.034422  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:25.610330  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 11:31:25.954809  595895 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 11:31:26.260607  595895 addons.go:238] Setting addon gcp-auth=true in "addons-214441"
	I0929 11:31:26.260695  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:26.261024  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:26.261068  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:26.276135  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36421
	I0929 11:31:26.276726  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:26.277323  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:26.277354  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:26.277924  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:26.278456  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:26.278490  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:26.293277  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40243
	I0929 11:31:26.293786  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:26.294319  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:26.294344  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:26.294858  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:26.295136  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:26.297279  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:26.297583  595895 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 11:31:26.297612  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:26.301409  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:26.302065  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:26.302093  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:26.302272  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:26.302474  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:26.302636  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:26.302830  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:26.648618  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.65053686s)
	I0929 11:31:26.648643  595895 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.628556534s)
	I0929 11:31:26.648693  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:26.648703  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:26.648707  595895 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.628486823s)
	I0929 11:31:26.648740  595895 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0929 11:31:26.648855  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.601423652s)
	I0929 11:31:26.648889  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:26.648898  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:26.649041  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:26.649056  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:26.649066  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:26.649073  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:26.649181  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:26.649214  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:26.649225  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:26.649256  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:26.649265  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:26.649555  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:26.649585  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:26.649698  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:26.649728  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:26.649741  595895 node_ready.go:35] waiting up to 6m0s for node "addons-214441" to be "Ready" ...
	I0929 11:31:26.649625  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:26.649665  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:26.797678  595895 node_ready.go:49] node "addons-214441" is "Ready"
	I0929 11:31:26.797712  595895 node_ready.go:38] duration metric: took 147.94134ms for node "addons-214441" to be "Ready" ...
	I0929 11:31:26.797735  595895 api_server.go:52] waiting for apiserver process to appear ...
	I0929 11:31:26.797797  595895 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:31:27.078868  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:27.078896  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:27.079284  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:27.079351  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:27.079372  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:27.220384  595895 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-214441" context rescaled to 1 replicas
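The coredns rescale logged above is done through minikube's in-process client (kapi.go) rather than a shell command; a roughly equivalent manual operation, shown only for illustration, would be:

    kubectl --context addons-214441 -n kube-system scale deployment coredns --replicas=1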
	I0929 11:31:30.522194  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.34358993s)
	I0929 11:31:30.522263  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.327765304s)
	I0929 11:31:30.522284  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522297  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522297  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522308  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522336  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.318803941s)
	I0929 11:31:30.522386  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522398  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522641  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.522658  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.522685  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522695  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522794  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.522804  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.522813  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522819  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522874  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:30.522863  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.522905  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.522914  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522922  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522952  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:30.522984  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.522990  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.523183  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:30.523188  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.523205  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.523212  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.523216  595895 addons.go:479] Verifying addon ingress=true in "addons-214441"
	I0929 11:31:30.523222  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.527182  595895 out.go:179] * Verifying ingress addon...
	I0929 11:31:30.529738  595895 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 11:31:30.708830  595895 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 11:31:30.708859  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:31.235125  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:31.629964  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:32.068126  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:32.586294  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:33.055440  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:33.661344  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:33.865322  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (14.640641229s)
	I0929 11:31:33.865361  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (14.526214451s)
	I0929 11:31:33.865396  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865407  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865413  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (14.382417731s)
	I0929 11:31:33.865425  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865448  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (14.358144157s)
	I0929 11:31:33.865456  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865470  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865472  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865527  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (14.157883934s)
	I0929 11:31:33.865528  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865545  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865554  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865410  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865659  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (14.144676501s)
	W0929 11:31:33.865707  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:33.865740  595895 retry.go:31] will retry after 127.952259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:33.865790  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (13.336965067s)
	I0929 11:31:33.865796  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.865807  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.865810  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865818  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865821  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865826  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865864  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.865883  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.865895  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.865895  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.865906  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865922  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.865928  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865931  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.865939  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865945  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865960  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (13.311901558s)
	I0929 11:31:33.865978  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865986  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866077  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (13.010152282s)
	I0929 11:31:33.866096  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.866124  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866162  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866187  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866214  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866223  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866230  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.866237  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866283  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (11.969964695s)
	W0929 11:31:33.866347  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 11:31:33.866370  595895 retry.go:31] will retry after 213.926415ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
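The failure above is an ordering problem: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same batch as the snapshot.storage.k8s.io CRDs that define it, and those CRDs are not yet established when the class is validated, hence "ensure CRDs are installed first". minikube simply retries (see the --force re-apply later in this log). A manual workaround sketch, assuming a local copy of the same manifest that the node keeps under /etc/kubernetes/addons, would be to wait for the CRDs before applying the class:

    kubectl --context addons-214441 wait --for=condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io \
      crd/volumesnapshotcontents.snapshot.storage.k8s.io \
      crd/volumesnapshots.snapshot.storage.k8s.io
    kubectl --context addons-214441 apply -f csi-hostpath-snapshotclass.yaml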
	I0929 11:31:33.866587  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866618  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866622  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866627  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.866630  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866636  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866640  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866651  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866662  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866606  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866736  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866752  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866766  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.866780  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866875  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866910  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866925  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.867202  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.867264  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.867284  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.867303  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.867339  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.867618  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.867761  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.867769  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.867778  595895 addons.go:479] Verifying addon registry=true in "addons-214441"
	I0929 11:31:33.868269  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.868300  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.868305  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.868451  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.868463  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.868472  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.868479  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.869037  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.869070  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.869076  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.869084  595895 addons.go:479] Verifying addon metrics-server=true in "addons-214441"
	I0929 11:31:33.869798  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.869839  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.869847  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.869975  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.870031  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.871564  595895 out.go:179] * Verifying registry addon...
	I0929 11:31:33.872479  595895 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-214441 service yakd-dashboard -n yakd-dashboard
	
	I0929 11:31:33.874294  595895 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 11:31:33.993863  595895 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 11:31:33.993900  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:33.994009  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:34.081538  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 11:31:34.115447  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:34.146570  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:34.146609  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:34.146947  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:34.146967  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:34.413578  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.803181451s)
	I0929 11:31:34.413616  595895 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (8.116003731s)
	I0929 11:31:34.413656  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:34.413669  595895 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (7.615843233s)
	I0929 11:31:34.413709  595895 api_server.go:72] duration metric: took 16.972266985s to wait for apiserver process to appear ...
	I0929 11:31:34.413722  595895 api_server.go:88] waiting for apiserver healthz status ...
	I0929 11:31:34.413750  595895 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0929 11:31:34.413675  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:34.414213  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:34.414230  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:34.414254  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:34.414261  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:34.414511  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:34.414529  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:34.414543  595895 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-214441"
	I0929 11:31:34.415286  595895 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 11:31:34.416180  595895 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 11:31:34.417833  595895 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 11:31:34.418933  595895 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 11:31:34.419343  595895 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 11:31:34.419365  595895 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 11:31:34.428017  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:34.435805  595895 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
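The healthz probe above hits the API server directly at https://192.168.39.76:8443/healthz; the same check can be reproduced without handling client certificates by going through kubectl's raw API access (illustrative only):

    kubectl --context addons-214441 get --raw /healthz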
	I0929 11:31:34.443092  595895 api_server.go:141] control plane version: v1.34.0
	I0929 11:31:34.443139  595895 api_server.go:131] duration metric: took 29.409177ms to wait for apiserver health ...
	I0929 11:31:34.443150  595895 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 11:31:34.495447  595895 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 11:31:34.495473  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:34.527406  595895 system_pods.go:59] 20 kube-system pods found
	I0929 11:31:34.527452  595895 system_pods.go:61] "amd-gpu-device-plugin-7jx7f" [97d8a167-36be-44b5-b99c-2b55db99df3e] Running
	I0929 11:31:34.527458  595895 system_pods.go:61] "coredns-66bc5c9577-fkh52" [bd9cc142-39db-4ed9-9ecc-3d270499136c] Running
	I0929 11:31:34.527463  595895 system_pods.go:61] "coredns-66bc5c9577-sgnb2" [343874b8-6c4d-4e36-8ebf-a379e3a93a98] Running
	I0929 11:31:34.527471  595895 system_pods.go:61] "csi-hostpath-attacher-0" [27d9af25-8d41-4c37-9359-c7bd4f88f09f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 11:31:34.527475  595895 system_pods.go:61] "csi-hostpath-resizer-0" [4f24e046-d5b2-41ff-a051-b0a572bf9348] Pending
	I0929 11:31:34.527484  595895 system_pods.go:61] "csi-hostpathplugin-8279f" [3cc6db25-521c-4711-9d36-e22ab6d16249] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 11:31:34.527490  595895 system_pods.go:61] "etcd-addons-214441" [1710b264-ccec-4054-8a1c-7d8e5ce49163] Running
	I0929 11:31:34.527494  595895 system_pods.go:61] "kube-apiserver-addons-214441" [302fdd61-6c51-4e6e-a1af-7856ccb9f2ca] Running
	I0929 11:31:34.527502  595895 system_pods.go:61] "kube-controller-manager-addons-214441" [3e9c3a44-d14e-4335-a141-f7c648e6159d] Running
	I0929 11:31:34.527507  595895 system_pods.go:61] "kube-ingress-dns-minikube" [12920e9d-debd-4783-9f79-1c3a345e576e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 11:31:34.527513  595895 system_pods.go:61] "kube-proxy-d9fnb" [229af565-3300-4cb4-8289-f3d9b4a9af81] Running
	I0929 11:31:34.527520  595895 system_pods.go:61] "kube-scheduler-addons-214441" [bcaec156-9259-4787-9d02-01a2203b43ac] Running
	I0929 11:31:34.527524  595895 system_pods.go:61] "metrics-server-85b7d694d7-zlrv7" [003bc9b9-be00-4e08-8af7-3443d0502181] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 11:31:34.527533  595895 system_pods.go:61] "nvidia-device-plugin-daemonset-x7b8m" [b96ad59d-d30d-4437-a5c7-ce0c5fc69348] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 11:31:34.527541  595895 system_pods.go:61] "registry-66898fdd98-d7zx7" [e0282924-ef3d-48eb-8906-ea10f183b39e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 11:31:34.527547  595895 system_pods.go:61] "registry-creds-764b6fb674-td8pw" [c6001777-2f97-49d8-972b-27d373557795] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 11:31:34.527557  595895 system_pods.go:61] "registry-proxy-grb7m" [d79830e7-a640-46a7-9b07-38e39aceac96] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 11:31:34.527562  595895 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pw4g9" [74b07f42-da97-4ad5-8fdb-748ab5cfbde2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:31:34.527571  595895 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wvh2l" [987a4a45-4d77-43ac-816c-ebc08faf51bc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:31:34.527575  595895 system_pods.go:61] "storage-provisioner" [e1a79041-edc7-4f8e-96cb-8b3566893a0e] Running
	I0929 11:31:34.527582  595895 system_pods.go:74] duration metric: took 84.42539ms to wait for pod list to return data ...
	I0929 11:31:34.527594  595895 default_sa.go:34] waiting for default service account to be created ...
	I0929 11:31:34.549252  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:34.556947  595895 default_sa.go:45] found service account: "default"
	I0929 11:31:34.556977  595895 default_sa.go:55] duration metric: took 29.376735ms for default service account to be created ...
	I0929 11:31:34.556988  595895 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 11:31:34.596290  595895 system_pods.go:86] 20 kube-system pods found
	I0929 11:31:34.596322  595895 system_pods.go:89] "amd-gpu-device-plugin-7jx7f" [97d8a167-36be-44b5-b99c-2b55db99df3e] Running
	I0929 11:31:34.596330  595895 system_pods.go:89] "coredns-66bc5c9577-fkh52" [bd9cc142-39db-4ed9-9ecc-3d270499136c] Running
	I0929 11:31:34.596334  595895 system_pods.go:89] "coredns-66bc5c9577-sgnb2" [343874b8-6c4d-4e36-8ebf-a379e3a93a98] Running
	I0929 11:31:34.596343  595895 system_pods.go:89] "csi-hostpath-attacher-0" [27d9af25-8d41-4c37-9359-c7bd4f88f09f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 11:31:34.596349  595895 system_pods.go:89] "csi-hostpath-resizer-0" [4f24e046-d5b2-41ff-a051-b0a572bf9348] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 11:31:34.596357  595895 system_pods.go:89] "csi-hostpathplugin-8279f" [3cc6db25-521c-4711-9d36-e22ab6d16249] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 11:31:34.596361  595895 system_pods.go:89] "etcd-addons-214441" [1710b264-ccec-4054-8a1c-7d8e5ce49163] Running
	I0929 11:31:34.596365  595895 system_pods.go:89] "kube-apiserver-addons-214441" [302fdd61-6c51-4e6e-a1af-7856ccb9f2ca] Running
	I0929 11:31:34.596369  595895 system_pods.go:89] "kube-controller-manager-addons-214441" [3e9c3a44-d14e-4335-a141-f7c648e6159d] Running
	I0929 11:31:34.596375  595895 system_pods.go:89] "kube-ingress-dns-minikube" [12920e9d-debd-4783-9f79-1c3a345e576e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 11:31:34.596381  595895 system_pods.go:89] "kube-proxy-d9fnb" [229af565-3300-4cb4-8289-f3d9b4a9af81] Running
	I0929 11:31:34.596385  595895 system_pods.go:89] "kube-scheduler-addons-214441" [bcaec156-9259-4787-9d02-01a2203b43ac] Running
	I0929 11:31:34.596390  595895 system_pods.go:89] "metrics-server-85b7d694d7-zlrv7" [003bc9b9-be00-4e08-8af7-3443d0502181] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 11:31:34.596398  595895 system_pods.go:89] "nvidia-device-plugin-daemonset-x7b8m" [b96ad59d-d30d-4437-a5c7-ce0c5fc69348] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 11:31:34.596404  595895 system_pods.go:89] "registry-66898fdd98-d7zx7" [e0282924-ef3d-48eb-8906-ea10f183b39e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 11:31:34.596409  595895 system_pods.go:89] "registry-creds-764b6fb674-td8pw" [c6001777-2f97-49d8-972b-27d373557795] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 11:31:34.596413  595895 system_pods.go:89] "registry-proxy-grb7m" [d79830e7-a640-46a7-9b07-38e39aceac96] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 11:31:34.596421  595895 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pw4g9" [74b07f42-da97-4ad5-8fdb-748ab5cfbde2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:31:34.596427  595895 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wvh2l" [987a4a45-4d77-43ac-816c-ebc08faf51bc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:31:34.596430  595895 system_pods.go:89] "storage-provisioner" [e1a79041-edc7-4f8e-96cb-8b3566893a0e] Running
	I0929 11:31:34.596439  595895 system_pods.go:126] duration metric: took 39.444621ms to wait for k8s-apps to be running ...
	I0929 11:31:34.596450  595895 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 11:31:34.596507  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:31:34.638029  595895 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 11:31:34.638063  595895 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 11:31:34.896745  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:35.000193  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:35.038316  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:35.057490  595895 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 11:31:35.057521  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 11:31:35.300242  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 11:31:35.379546  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:35.428677  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:35.535091  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:35.881406  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:35.938231  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:36.039311  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:36.382155  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:36.425663  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:36.535684  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:36.886954  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:36.927490  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:37.044975  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:37.382165  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:37.431026  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:37.547302  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:37.920673  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:37.944368  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:38.063651  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:38.330176  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.336121933s)
	W0929 11:31:38.330254  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:38.330284  595895 retry.go:31] will retry after 312.007159ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
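	[note] The repeated apply failure above is kubectl's client-side validation rejecting /etc/kubernetes/addons/ig-crd.yaml because the manifest document it reads has no top-level apiVersion and kind fields; the other resources in the same apply (namespace, serviceaccount, daemonset, ...) go through unchanged. A minimal sketch of the header every Kubernetes manifest document needs to pass that validation is shown below; it is illustrative only and is not the actual inspektor-gadget CRD shipped with the addon (the name used here is a hypothetical placeholder):
	
	    # Hypothetical illustration: the two top-level fields that kubectl
	    # reports as "not set" for ig-crd.yaml. Every manifest document must
	    # declare both before any schema validation of the body happens.
	    apiVersion: apiextensions.k8s.io/v1   # API group/version of the object
	    kind: CustomResourceDefinition        # "kind not set" means this line is absent
	    metadata:
	      name: examples.example.dev          # placeholder CRD name, <plural>.<group> form
	
	Assuming the rest of the file is otherwise valid, restoring those two fields would let the retry loop in addons.go succeed instead of cycling on the same "Process exited with status 1" error.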
	I0929 11:31:38.330290  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.248696545s)
	I0929 11:31:38.330341  595895 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.73381029s)
	I0929 11:31:38.330367  595895 system_svc.go:56] duration metric: took 3.733914032s WaitForService to wait for kubelet
	I0929 11:31:38.330377  595895 kubeadm.go:578] duration metric: took 20.888935766s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:31:38.330403  595895 node_conditions.go:102] verifying NodePressure condition ...
	I0929 11:31:38.330343  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:38.330449  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:38.330448  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (3.030164486s)
	I0929 11:31:38.330495  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:38.330509  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:38.330817  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:38.330832  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:38.330841  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:38.330848  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:38.330851  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:38.330882  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:38.330895  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:38.330903  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:38.330910  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:38.331221  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:38.331223  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:38.331238  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:38.331251  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:38.331258  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:38.332465  595895 addons.go:479] Verifying addon gcp-auth=true in "addons-214441"
	I0929 11:31:38.334695  595895 out.go:179] * Verifying gcp-auth addon...
	I0929 11:31:38.336858  595895 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 11:31:38.341614  595895 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0929 11:31:38.341645  595895 node_conditions.go:123] node cpu capacity is 2
	I0929 11:31:38.341662  595895 node_conditions.go:105] duration metric: took 11.25287ms to run NodePressure ...
	I0929 11:31:38.341688  595895 start.go:241] waiting for startup goroutines ...
	I0929 11:31:38.343873  595895 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 11:31:38.343896  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:38.381193  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:38.423947  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:38.537472  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:38.642514  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:38.843272  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:38.944959  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:38.945123  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:39.033029  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:39.342350  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:39.380435  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:39.424230  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:39.537307  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:39.645310  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.002737784s)
	W0929 11:31:39.645357  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:39.645385  595895 retry.go:31] will retry after 298.904966ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:39.841477  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:39.879072  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:39.922915  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:39.945025  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:40.034681  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:40.343272  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:40.382403  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:40.422942  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:40.539442  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:40.844610  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:40.879893  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:40.924951  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:41.033826  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:41.124246  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.179166796s)
	W0929 11:31:41.124315  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:41.124339  595895 retry.go:31] will retry after 649.538473ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:41.343005  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:41.380641  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:41.425734  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:41.533709  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:41.774560  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:41.841236  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:41.878527  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:41.924650  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:42.035789  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:42.342468  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:42.380731  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:42.426156  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:42.534471  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:42.785912  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.011289133s)
	W0929 11:31:42.785977  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:42.786005  595895 retry.go:31] will retry after 983.289132ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:42.842132  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:42.879170  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:42.924415  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:43.036251  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:43.343664  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:43.382521  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:43.423598  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:43.534301  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:43.770317  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:43.843700  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:43.880339  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:43.925260  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:44.035702  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:44.342152  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:44.380186  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:44.427570  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:44.537930  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:44.812756  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.042397237s)
	W0929 11:31:44.812812  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:44.812836  595895 retry.go:31] will retry after 2.137947671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:44.843045  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:44.881899  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:44.924762  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:45.035718  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:45.343550  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:45.378897  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:45.424866  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:45.534338  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:45.841433  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:45.877671  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:45.923645  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:46.034379  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:46.372337  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:46.406356  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:46.426866  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:46.534032  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:46.842343  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:46.879578  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:46.925175  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:46.951146  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:47.034343  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:47.344240  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:47.382773  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:47.424668  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:47.540037  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:47.843427  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:47.879391  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:47.924262  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:47.960092  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.008893629s)
	W0929 11:31:47.960177  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:47.960206  595895 retry.go:31] will retry after 2.504757299s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:48.033591  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:48.341481  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:48.378697  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:48.424514  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:48.536592  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:48.842185  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:48.879742  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:48.923614  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:49.034098  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:49.340781  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:49.379506  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:49.423231  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:49.534207  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:49.842436  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:49.877896  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:49.924231  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:50.034614  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:50.341556  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:50.379007  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:50.423685  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:50.465827  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:50.536792  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:50.843824  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:50.879454  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:50.924711  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:51.035609  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:51.343958  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:51.379841  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:51.424239  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:51.468054  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.002171892s)
	W0929 11:31:51.468114  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:51.468140  595895 retry.go:31] will retry after 5.613548218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:51.533585  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:51.963029  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:51.963886  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:51.964026  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:52.060713  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:52.343223  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:52.378836  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:52.424767  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:52.534427  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:52.849585  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:52.879670  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:52.948684  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:53.048366  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:53.346453  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:53.380741  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:53.426760  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:53.533978  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:53.840987  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:53.879766  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:53.924223  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:54.035753  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:54.342742  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:54.378763  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:54.423439  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:54.535260  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:54.841859  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:54.880183  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:54.925299  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:55.033854  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:55.340853  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:55.378822  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:55.424172  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:55.534313  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:55.842189  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:55.879647  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:55.925521  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:56.034145  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:56.341524  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:56.384803  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:56.424070  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:56.533658  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:56.845007  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:56.881917  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:56.944166  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:57.044730  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:57.082647  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:57.345840  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:57.379131  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:57.425387  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:57.534328  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:57.843711  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:57.879327  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:57.925624  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:58.038058  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:58.345139  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:58.379479  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:58.427479  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:58.431242  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.348544969s)
	W0929 11:31:58.431293  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:58.431314  595895 retry.go:31] will retry after 5.599503168s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:58.535825  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:58.841717  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:58.878293  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:58.926559  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:59.035878  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:59.341486  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:59.381532  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:59.425077  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:59.532752  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:59.841172  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:59.878180  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:59.923096  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:00.034481  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:00.557941  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:00.559858  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:00.559963  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:00.560670  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:00.841990  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:00.879357  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:00.926097  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:01.036394  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:01.344642  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:01.379875  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:01.425784  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:01.534466  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:01.842499  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:01.878243  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:01.924047  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:02.033958  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:02.342377  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:02.380154  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:02.423813  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:02.535090  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:02.843862  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:02.879556  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:02.924521  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:03.034759  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:03.340099  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:03.378625  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:03.423534  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:03.534511  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:03.841201  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:03.878471  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:03.924393  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:04.031608  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:32:04.037031  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:04.344499  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:04.378709  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:04.426297  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:04.536239  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:04.842255  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:04.878783  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:04.925876  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:05.037628  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:05.250099  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.218439403s)
	W0929 11:32:05.250163  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:05.250186  595895 retry.go:31] will retry after 6.3969875s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
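The validation failure above shows why every retry of this apply fails the same way: the copy of /etc/kubernetes/addons/ig-crd.yaml being applied is missing its top-level apiVersion and kind fields, so kubectl rejects it before the Inspektor Gadget CRD can be created. For reference, a well-formed CustomResourceDefinition manifest opens with a header along these lines (an illustrative sketch only; the group, resource names, and schema below are hypothetical and not taken from the actual addon manifest):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: traces.gadget.kinvolk.io      # hypothetical CRD name, for illustration only
    spec:
      group: gadget.kinvolk.io
      scope: Namespaced
      names:
        plural: traces
        singular: trace
        kind: Trace
      versions:
        - name: v1alpha1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              x-kubernetes-preserve-unknown-fields: true

Without the first two lines, kubectl apply fails validation exactly as logged, no matter how many times the addon controller retries it.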
	I0929 11:32:05.342875  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:05.380683  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:05.424490  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:05.534483  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:05.841804  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:05.880284  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:05.923385  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:06.034868  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:06.341952  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:06.378384  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:06.426408  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:06.535793  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:06.842154  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:06.880699  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:06.924358  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:07.035474  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:07.343686  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:07.378323  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:07.423762  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:07.535390  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:07.843851  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:07.881716  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:07.927684  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:08.037583  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:08.341340  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:08.380517  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:08.424488  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:08.535292  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:08.841002  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:08.879020  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:08.924253  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:09.089297  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:09.340800  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:09.377819  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:09.423823  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:09.534297  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:09.849243  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:09.950172  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:09.950267  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:10.036059  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:10.346922  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:10.379976  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:10.424634  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:10.538864  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:10.842015  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:10.879192  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:10.925328  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:11.040957  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:11.349029  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:11.380885  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:11.452716  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:11.533526  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:11.648223  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:32:11.846882  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:11.881994  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:11.924898  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:12.037323  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:12.342006  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:12.378476  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:12.425404  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:12.544040  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:12.792386  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.144111976s)
	W0929 11:32:12.792447  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:12.792475  595895 retry.go:31] will retry after 13.411476283s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:12.842021  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:12.880179  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:12.924788  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:13.040328  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:13.342434  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:13.378229  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:13.423792  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:13.533728  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:13.843276  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:13.881114  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:13.924958  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:14.034759  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:14.342679  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:14.391569  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:14.496903  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:14.537421  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:14.843175  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:14.880166  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:14.923743  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:15.033994  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:15.343313  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:15.378881  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:15.423448  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:15.538003  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:15.845026  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:15.879663  595895 kapi.go:107] duration metric: took 42.005359357s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 11:32:15.924537  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:16.034645  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:16.341847  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:16.423671  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:16.542699  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:16.844239  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:16.931285  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:17.038278  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:17.353396  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:17.429078  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:17.543634  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:17.844298  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:17.946425  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:18.041877  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:18.345833  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:18.428431  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:18.540908  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:18.840650  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:18.941953  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:19.044517  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:19.341978  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:19.424948  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:19.534807  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:19.839721  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:19.923994  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:20.033049  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:20.342737  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:20.425291  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:20.540624  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:20.844143  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:20.923381  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:21.034820  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:21.343509  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:21.423753  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:21.533929  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:21.841334  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:21.923232  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:22.035002  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:22.630689  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:22.632895  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:22.632941  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:22.845479  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:22.926876  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:23.038229  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:23.355255  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:23.427225  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:23.538625  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:23.844878  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:23.934777  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:24.035280  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:24.346419  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:24.423729  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:24.534589  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:24.842134  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:24.923902  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:25.034892  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:25.362314  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:25.488458  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:25.587385  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:25.861373  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:25.929934  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:26.034355  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:26.204639  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:32:26.361386  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:26.429512  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:26.537022  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:26.843446  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:26.926054  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:27.035634  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:27.344336  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:27.424901  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:27.537642  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:27.644135  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.439429306s)
	W0929 11:32:27.644198  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:27.644227  595895 retry.go:31] will retry after 29.327619656s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:27.842768  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:27.923415  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:28.034767  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:28.343738  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:28.445503  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:28.546159  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:28.851845  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:28.927009  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:29.033400  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:29.341998  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:29.426197  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:29.537012  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:29.842012  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:29.924188  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:30.034037  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:30.346865  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:30.430853  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:30.542769  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:30.842367  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:30.922904  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:31.033768  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:31.341881  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:31.425338  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:31.535963  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:31.844006  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:31.924398  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:32.034705  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:32.346065  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:32.423672  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:32.534377  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:32.842447  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:32.925931  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:33.034800  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:33.387960  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:33.429171  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:33.546901  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:33.852519  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:33.953288  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:34.035154  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:34.344025  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:34.431259  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:34.536600  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:34.843653  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:34.927609  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:35.036794  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:35.341408  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:35.425312  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:35.541227  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:35.847181  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:35.947699  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:36.035760  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:36.344915  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:36.424144  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:36.535593  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:36.841859  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:36.924975  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:37.037919  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:37.452583  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:37.459370  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:37.537236  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:37.841013  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:37.923280  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:38.036969  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:38.340515  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:38.425769  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:38.549235  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:38.842439  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:38.925062  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:39.035751  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:39.341398  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:39.422778  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:39.534951  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:39.841870  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:39.925988  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:40.034408  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:40.340654  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:40.424350  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:40.535075  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:40.843236  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:40.924921  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:41.034406  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:41.497913  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:41.499293  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:41.535243  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:41.844020  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:41.923065  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:42.045660  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:42.342026  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:42.426493  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:42.535570  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:42.841485  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:42.923010  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:43.039027  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:43.346733  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:43.432195  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:43.540145  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:43.885089  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:43.972714  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:44.068027  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:44.345507  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:44.427061  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:44.535862  595895 kapi.go:107] duration metric: took 1m14.00612311s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 11:32:44.842493  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:44.929592  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:45.347246  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:45.424028  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:45.841905  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:45.923701  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:46.347078  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:46.425229  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:46.845817  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:46.925006  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:47.341259  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:47.426132  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:47.845143  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:47.924205  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:48.349502  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:48.452604  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:48.846442  595895 kapi.go:107] duration metric: took 1m10.509578031s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 11:32:48.847867  595895 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-214441 cluster.
	I0929 11:32:48.849227  595895 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 11:32:48.850374  595895 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
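The gcp-auth messages above describe the opt-out mechanism: a pod that carries the gcp-auth-skip-secret label is left untouched by the credential-mounting webhook. A minimal sketch of such a pod follows (the pod name, image, and label value are illustrative assumptions, not taken from this report):

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds               # hypothetical name, for illustration only
      labels:
        gcp-auth-skip-secret: "true"   # presence of this label skips the credential mount
    spec:
      containers:
        - name: app
          image: busybox:1.36
          command: ["sleep", "3600"]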
	I0929 11:32:48.946549  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:49.426824  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:49.927802  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:50.426120  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:50.925871  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:51.426655  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:51.927170  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:52.426213  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:52.923791  595895 kapi.go:107] duration metric: took 1m18.504852087s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0929 11:32:56.972597  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 11:32:57.723998  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:57.724041  595895 retry.go:31] will retry after 18.741816746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:33:16.468501  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 11:33:17.218683  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:33:17.218783  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:33:17.218797  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:33:17.219140  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:33:17.219161  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:33:17.219172  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:33:17.219180  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:33:17.219203  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:33:17.219480  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:33:17.219502  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:33:17.219534  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	W0929 11:33:17.219634  595895 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0929 11:33:17.221637  595895 out.go:179] * Enabled addons: ingress-dns, storage-provisioner-rancher, storage-provisioner, cloud-spanner, volcano, amd-gpu-device-plugin, metrics-server, registry-creds, nvidia-device-plugin, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0929 11:33:17.223007  595895 addons.go:514] duration metric: took 1m59.781528816s for enable addons: enabled=[ingress-dns storage-provisioner-rancher storage-provisioner cloud-spanner volcano amd-gpu-device-plugin metrics-server registry-creds nvidia-device-plugin yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0929 11:33:17.223046  595895 start.go:246] waiting for cluster config update ...
	I0929 11:33:17.223066  595895 start.go:255] writing updated cluster config ...
	I0929 11:33:17.223379  595895 ssh_runner.go:195] Run: rm -f paused
	I0929 11:33:17.229885  595895 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:33:17.234611  595895 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fkh52" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.240669  595895 pod_ready.go:94] pod "coredns-66bc5c9577-fkh52" is "Ready"
	I0929 11:33:17.240694  595895 pod_ready.go:86] duration metric: took 6.057488ms for pod "coredns-66bc5c9577-fkh52" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.243134  595895 pod_ready.go:83] waiting for pod "etcd-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.248977  595895 pod_ready.go:94] pod "etcd-addons-214441" is "Ready"
	I0929 11:33:17.249003  595895 pod_ready.go:86] duration metric: took 5.848678ms for pod "etcd-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.251694  595895 pod_ready.go:83] waiting for pod "kube-apiserver-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.257270  595895 pod_ready.go:94] pod "kube-apiserver-addons-214441" is "Ready"
	I0929 11:33:17.257299  595895 pod_ready.go:86] duration metric: took 5.583626ms for pod "kube-apiserver-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.259585  595895 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.635253  595895 pod_ready.go:94] pod "kube-controller-manager-addons-214441" is "Ready"
	I0929 11:33:17.635287  595895 pod_ready.go:86] duration metric: took 375.675116ms for pod "kube-controller-manager-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.834921  595895 pod_ready.go:83] waiting for pod "kube-proxy-d9fnb" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:18.234706  595895 pod_ready.go:94] pod "kube-proxy-d9fnb" is "Ready"
	I0929 11:33:18.234735  595895 pod_ready.go:86] duration metric: took 399.786159ms for pod "kube-proxy-d9fnb" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:18.435590  595895 pod_ready.go:83] waiting for pod "kube-scheduler-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:18.834304  595895 pod_ready.go:94] pod "kube-scheduler-addons-214441" is "Ready"
	I0929 11:33:18.834340  595895 pod_ready.go:86] duration metric: took 398.719914ms for pod "kube-scheduler-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:18.834353  595895 pod_ready.go:40] duration metric: took 1.60442513s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:33:18.881427  595895 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 11:33:18.883901  595895 out.go:179] * Done! kubectl is now configured to use "addons-214441" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 29 11:41:34 addons-214441 dockerd[1525]: time="2025-09-29T11:41:34.169371020Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:41:41 addons-214441 dockerd[1525]: time="2025-09-29T11:41:41.068354920Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 11:41:41 addons-214441 dockerd[1525]: time="2025-09-29T11:41:41.109169619Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:42:13 addons-214441 dockerd[1525]: time="2025-09-29T11:42:13.068741582Z" level=info msg="ignoring event" container=db70a778966bef3f17d5b743f34cb7a98a6a579c7be47c15f7dae885b25a4b1b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:42:14 addons-214441 dockerd[1525]: time="2025-09-29T11:42:14.242914506Z" level=info msg="ignoring event" container=c8b3e1c8b1ffdce105f2d4b1845989f032f14be6ab336366dcc8033cf1a26d29 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:42:23 addons-214441 dockerd[1525]: time="2025-09-29T11:42:23.722809689Z" level=info msg="ignoring event" container=e49c7022a687d90274d49ab656c59cf493c842020f90a64d91fbd859998a3b4d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:42:23 addons-214441 dockerd[1525]: time="2025-09-29T11:42:23.906528908Z" level=info msg="ignoring event" container=6c19c08a0c4b016f5ddf2b637ff411e873f5b82bd9522d934341ed0df582d7d9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:42:29 addons-214441 cri-dockerd[1389]: time="2025-09-29T11:42:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/be46dd12a568554d1b475c6c260164613702e2f5fa7bda6b80cac94904a8502c/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 29 11:42:29 addons-214441 dockerd[1525]: time="2025-09-29T11:42:29.801095357Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 11:42:29 addons-214441 dockerd[1525]: time="2025-09-29T11:42:29.843576981Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:42:43 addons-214441 dockerd[1525]: time="2025-09-29T11:42:43.075566421Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 11:42:43 addons-214441 dockerd[1525]: time="2025-09-29T11:42:43.112765795Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:42:45 addons-214441 dockerd[1525]: time="2025-09-29T11:42:45.154840245Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:43:03 addons-214441 dockerd[1525]: time="2025-09-29T11:43:03.153401267Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:43:09 addons-214441 dockerd[1525]: time="2025-09-29T11:43:09.074862693Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 11:43:09 addons-214441 dockerd[1525]: time="2025-09-29T11:43:09.132638196Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:43:57 addons-214441 dockerd[1525]: time="2025-09-29T11:43:57.074866581Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 11:43:57 addons-214441 dockerd[1525]: time="2025-09-29T11:43:57.185791327Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:43:57 addons-214441 cri-dockerd[1389]: time="2025-09-29T11:43:57Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Pulling from library/busybox"
	Sep 29 11:44:29 addons-214441 dockerd[1525]: time="2025-09-29T11:44:29.855693553Z" level=info msg="ignoring event" container=be46dd12a568554d1b475c6c260164613702e2f5fa7bda6b80cac94904a8502c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:45:00 addons-214441 cri-dockerd[1389]: time="2025-09-29T11:45:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ec4ac1c4a59a99b911940e7471fd4d62bd648ddf20b864c871d76c778232c25f/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 29 11:45:00 addons-214441 dockerd[1525]: time="2025-09-29T11:45:00.392898188Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 11:45:00 addons-214441 dockerd[1525]: time="2025-09-29T11:45:00.436740281Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:45:12 addons-214441 dockerd[1525]: time="2025-09-29T11:45:12.090833631Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 11:45:12 addons-214441 dockerd[1525]: time="2025-09-29T11:45:12.136853848Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
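	
	Note on the pull failures above: each "toomanyrequests" line is an anonymous pull of docker.io/library/busybox being rejected by Docker Hub's unauthenticated rate limit, which is why the busybox-based helper pods (e.g. the local-path-storage helpers whose containers appear in these same logs) never receive their image. A possible mitigation sketch, not part of this run and dependent on the pods' imagePullPolicy, is to pre-load the image into the minikube node (or authenticate the node's Docker daemon via docker login so pulls count against an account quota):
	
	  # sketch: pull once from a host that has Docker Hub quota/credentials, then copy into the node
	  docker pull busybox:stable
	  minikube -p addons-214441 image load busybox:stable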
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	8f0982c238973       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          5 minutes ago       Running             busybox                                  0                   66bafac6b9afb       busybox
	af544573fc0a7       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          12 minutes ago      Running             csi-snapshotter                          0                   02a7d350b8353       csi-hostpathplugin-8279f
	0ce41bd4faa5b       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          12 minutes ago      Running             csi-provisioner                          0                   02a7d350b8353       csi-hostpathplugin-8279f
	a8b5f59d15a16       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            12 minutes ago      Running             liveness-probe                           0                   02a7d350b8353       csi-hostpathplugin-8279f
	2514173d96a26       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           12 minutes ago      Running             hostpath                                 0                   02a7d350b8353       csi-hostpathplugin-8279f
	9b5cb54a94a47       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             12 minutes ago      Running             controller                               0                   8b83af6a32772       ingress-nginx-controller-9cc49f96f-h99dj
	ef4f6e22ce31a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                12 minutes ago      Running             node-driver-registrar                    0                   02a7d350b8353       csi-hostpathplugin-8279f
	5810f70edf860       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   12 minutes ago      Running             csi-external-health-monitor-controller   0                   02a7d350b8353       csi-hostpathplugin-8279f
	51f0c139f4f77       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              12 minutes ago      Running             csi-resizer                              0                   9e3b6780764f8       csi-hostpath-resizer-0
	e02a58717cc7c       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             12 minutes ago      Running             csi-attacher                             0                   00ac4103d1658       csi-hostpath-attacher-0
	e805d753e363a       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      12 minutes ago      Running             volume-snapshot-controller               0                   5ef4f58a4b6da       snapshot-controller-7d9fbc56b8-pw4g9
	868179ee6252a       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      12 minutes ago      Running             volume-snapshot-controller               0                   34844f808604d       snapshot-controller-7d9fbc56b8-wvh2l
	30d73d85a386c       8c217da6734db                                                                                                                                12 minutes ago      Exited              patch                                    1                   63ec050554699       ingress-nginx-admission-patch-tp6tp
	4182ff3d1e473       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   12 minutes ago      Exited              create                                   0                   f519da4bfec27       ingress-nginx-admission-create-s6nvq
	220ba84adaccb       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            12 minutes ago      Running             gadget                                   0                   95e2903b29637       gadget-xvvvf
	31302c4317135       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       12 minutes ago      Running             local-path-provisioner                   0                   621898582dfa1       local-path-provisioner-648f6765c9-fq5l2
	48adb1b2452be       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                                         13 minutes ago      Running             minikube-ingress-dns                     0                   3ce8cc04a57f5       kube-ingress-dns-minikube
	388ea771a1c89       6e38f40d628db                                                                                                                                13 minutes ago      Running             storage-provisioner                      0                   a451536f2a3ae       storage-provisioner
	ef7f4d809a410       rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                               13 minutes ago      Running             amd-gpu-device-plugin                    0                   efbec0257280a       amd-gpu-device-plugin-7jx7f
	5629c377b6053       52546a367cc9e                                                                                                                                13 minutes ago      Running             coredns                                  0                   b6c342cfbd0e9       coredns-66bc5c9577-fkh52
	cf32cea215063       df0860106674d                                                                                                                                13 minutes ago      Running             kube-proxy                               0                   164bb1f35fdbf       kube-proxy-d9fnb
	1b712309a5901       46169d968e920                                                                                                                                14 minutes ago      Running             kube-scheduler                           0                   16368e958b541       kube-scheduler-addons-214441
	5df8c088591fb       5f1f5298c888d                                                                                                                                14 minutes ago      Running             etcd                                     0                   0a4ad14786721       etcd-addons-214441
	b5368f01fa760       90550c43ad2bc                                                                                                                                14 minutes ago      Running             kube-apiserver                           0                   47b3b468b3308       kube-apiserver-addons-214441
	b7a56dc83eb1d       a0af72f2ec6d6                                                                                                                                14 minutes ago      Running             kube-controller-manager                  0                   8a7efdf44079d       kube-controller-manager-addons-214441
	
	
	==> controller_ingress [9b5cb54a94a4] <==
	I0929 11:32:45.021197       7 controller.go:240] "Initial sync, sleeping for 1 second"
	I0929 11:32:45.021384       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-h99dj", UID:"2bde8bfa-47f0-48da-9e63-cd8e2a0a38c6", APIVersion:"v1", ResourceVersion:"784", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0929 11:32:45.037639       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-h99dj" node="addons-214441"
	W0929 11:39:51.373839       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0929 11:39:51.377315       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0929 11:39:51.383910       7 store.go:443] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	W0929 11:39:51.384731       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0929 11:39:51.386972       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0929 11:39:51.388223       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"6c60e7a0-fa15-408e-810a-a4af1c88fe08", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2366", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0929 11:39:51.444940       7 controller.go:228] "Backend successfully reloaded"
	I0929 11:39:51.450504       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-h99dj", UID:"2bde8bfa-47f0-48da-9e63-cd8e2a0a38c6", APIVersion:"v1", ResourceVersion:"784", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0929 11:39:54.719235       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I0929 11:39:54.719924       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0929 11:39:54.771503       7 controller.go:228] "Backend successfully reloaded"
	I0929 11:39:54.772049       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-h99dj", UID:"2bde8bfa-47f0-48da-9e63-cd8e2a0a38c6", APIVersion:"v1", ResourceVersion:"784", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0929 11:39:58.057011       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 11:40:01.385065       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 11:40:04.718802       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 11:40:08.052750       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 11:40:11.385651       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I0929 11:40:44.966647       7 status.go:309] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.39.76"}]
	I0929 11:40:44.973434       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"6c60e7a0-fa15-408e-810a-a4af1c88fe08", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2672", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0929 11:40:44.974230       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 11:42:12.884706       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 11:42:23.602348       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
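	
	Note on the warnings above: "Service \"default/nginx\" does not have any active Endpoint" means the nginx backend pod behind the test ingress never became Ready, so the controller has nothing to route to. A hypothetical follow-up check against the same cluster (not captured in this run):
	
	  # confirm the backend pod's state and whether its Service has any endpoints
	  kubectl --context addons-214441 -n default get pod nginx -o wide
	  kubectl --context addons-214441 -n default get endpoints nginx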
	
	
	==> coredns [5629c377b605] <==
	[INFO] 10.244.0.7:52212 - 14403 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.001145753s
	[INFO] 10.244.0.7:52212 - 34526 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.001027976s
	[INFO] 10.244.0.7:52212 - 40091 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.002958291s
	[INFO] 10.244.0.7:52212 - 8101 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000112715s
	[INFO] 10.244.0.7:52212 - 55833 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000201304s
	[INFO] 10.244.0.7:52212 - 46374 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000813986s
	[INFO] 10.244.0.7:52212 - 13461 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00014644s
	[INFO] 10.244.0.7:58134 - 57276 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000168682s
	[INFO] 10.244.0.7:58134 - 56902 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000087725s
	[INFO] 10.244.0.7:45806 - 23713 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000124662s
	[INFO] 10.244.0.7:45806 - 23950 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000142715s
	[INFO] 10.244.0.7:42777 - 55128 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080735s
	[INFO] 10.244.0.7:42777 - 54892 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000216294s
	[INFO] 10.244.0.7:36398 - 14124 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000321419s
	[INFO] 10.244.0.7:36398 - 13929 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000550817s
	[INFO] 10.244.0.26:41550 - 7840 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00065483s
	[INFO] 10.244.0.26:48585 - 52888 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000202217s
	[INFO] 10.244.0.26:53114 - 55168 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000190191s
	[INFO] 10.244.0.26:47096 - 26187 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000662248s
	[INFO] 10.244.0.26:48999 - 38178 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00015298s
	[INFO] 10.244.0.26:58286 - 39587 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000285241s
	[INFO] 10.244.0.26:45238 - 61249 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003642198s
	[INFO] 10.244.0.26:33573 - 52185 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.003922074s
	[INFO] 10.244.0.30:45249 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.002086838s
	[INFO] 10.244.0.30:35918 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000164605s
	
	
	==> describe nodes <==
	Name:               addons-214441
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-214441
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8c533eb3dce8338d3d4e7231ea97d1c44ed6f81
	                    minikube.k8s.io/name=addons-214441
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T11_31_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-214441
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-214441"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 11:31:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-214441
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:45:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:40:14 +0000   Mon, 29 Sep 2025 11:31:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:40:14 +0000   Mon, 29 Sep 2025 11:31:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:40:14 +0000   Mon, 29 Sep 2025 11:31:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:40:14 +0000   Mon, 29 Sep 2025 11:31:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    addons-214441
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 44179717398847cdb8d861dffe58e059
	  System UUID:                44179717-3988-47cd-b8d8-61dffe58e059
	  Boot ID:                    f083535d-5807-413a-9a6b-1a0bbe2d4432
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  gadget                      gadget-xvvvf                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-h99dj                      100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         13m
	  kube-system                 amd-gpu-device-plugin-7jx7f                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-66bc5c9577-fkh52                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpathplugin-8279f                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-addons-214441                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         14m
	  kube-system                 kube-apiserver-addons-214441                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-214441                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-d9fnb                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-214441                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 snapshot-controller-7d9fbc56b8-pw4g9                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-7d9fbc56b8-wvh2l                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  local-path-storage          helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  local-path-storage          local-path-provisioner-648f6765c9-fq5l2                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node addons-214441 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node addons-214441 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node addons-214441 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m   node-controller  Node addons-214441 event: Registered Node addons-214441 in Controller
	  Normal  NodeReady                13m   kubelet          Node addons-214441 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.186219] kauditd_printk_skb: 164 callbacks suppressed
	[  +0.000058] kauditd_printk_skb: 275 callbacks suppressed
	[  +1.798616] kauditd_printk_skb: 343 callbacks suppressed
	[ +13.445646] kauditd_printk_skb: 68 callbacks suppressed
	[  +5.142447] kauditd_printk_skb: 20 callbacks suppressed
	[Sep29 11:32] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.199632] kauditd_printk_skb: 38 callbacks suppressed
	[  +1.030429] kauditd_printk_skb: 78 callbacks suppressed
	[  +5.195773] kauditd_printk_skb: 75 callbacks suppressed
	[  +5.274224] kauditd_printk_skb: 150 callbacks suppressed
	[  +5.780886] kauditd_printk_skb: 68 callbacks suppressed
	[  +8.295767] kauditd_printk_skb: 56 callbacks suppressed
	[Sep29 11:39] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.045350] kauditd_printk_skb: 59 callbacks suppressed
	[ +11.893143] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.745446] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.704785] kauditd_printk_skb: 81 callbacks suppressed
	[Sep29 11:40] kauditd_printk_skb: 79 callbacks suppressed
	[  +2.308317] kauditd_printk_skb: 113 callbacks suppressed
	[  +1.203541] kauditd_printk_skb: 47 callbacks suppressed
	[Sep29 11:42] kauditd_printk_skb: 27 callbacks suppressed
	[  +9.517499] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.729582] kauditd_printk_skb: 22 callbacks suppressed
	[Sep29 11:44] kauditd_printk_skb: 26 callbacks suppressed
	[Sep29 11:45] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [5df8c088591f] <==
	{"level":"info","ts":"2025-09-29T11:32:00.549416Z","caller":"traceutil/trace.go:172","msg":"trace[283960959] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1061; }","duration":"215.430561ms","start":"2025-09-29T11:32:00.333975Z","end":"2025-09-29T11:32:00.549406Z","steps":["trace[283960959] 'agreement among raft nodes before linearized reading'  (duration: 214.453965ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:32:00.549612Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"178.233017ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:00.549630Z","caller":"traceutil/trace.go:172","msg":"trace[1676271402] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1062; }","duration":"178.256779ms","start":"2025-09-29T11:32:00.371368Z","end":"2025-09-29T11:32:00.549625Z","steps":["trace[1676271402] 'agreement among raft nodes before linearized reading'  (duration: 178.210962ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:32:00.549775Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.256178ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:00.549795Z","caller":"traceutil/trace.go:172","msg":"trace[872905781] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1062; }","duration":"133.278789ms","start":"2025-09-29T11:32:00.416510Z","end":"2025-09-29T11:32:00.549789Z","steps":["trace[872905781] 'agreement among raft nodes before linearized reading'  (duration: 133.240765ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:32:22.619881Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"283.951682ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:22.619953Z","caller":"traceutil/trace.go:172","msg":"trace[256565612] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1138; }","duration":"284.054314ms","start":"2025-09-29T11:32:22.335884Z","end":"2025-09-29T11:32:22.619939Z","steps":["trace[256565612] 'range keys from in-memory index tree'  (duration: 283.898213ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:32:22.620417Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"203.038923ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:22.620455Z","caller":"traceutil/trace.go:172","msg":"trace[2141218366] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1138; }","duration":"203.079865ms","start":"2025-09-29T11:32:22.417365Z","end":"2025-09-29T11:32:22.620444Z","steps":["trace[2141218366] 'range keys from in-memory index tree'  (duration: 202.851561ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:32:37.446139Z","caller":"traceutil/trace.go:172","msg":"trace[1518739598] linearizableReadLoop","detail":"{readStateIndex:1281; appliedIndex:1281; }","duration":"111.376689ms","start":"2025-09-29T11:32:37.334743Z","end":"2025-09-29T11:32:37.446120Z","steps":["trace[1518739598] 'read index received'  (duration: 111.370356ms)","trace[1518739598] 'applied index is now lower than readState.Index'  (duration: 5.449µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:32:37.446365Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.596508ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:37.446409Z","caller":"traceutil/trace.go:172","msg":"trace[333303529] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1250; }","duration":"111.664223ms","start":"2025-09-29T11:32:37.334737Z","end":"2025-09-29T11:32:37.446401Z","steps":["trace[333303529] 'agreement among raft nodes before linearized reading'  (duration: 111.566754ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:32:37.447956Z","caller":"traceutil/trace.go:172","msg":"trace[1818807407] transaction","detail":"{read_only:false; response_revision:1251; number_of_response:1; }","duration":"216.083326ms","start":"2025-09-29T11:32:37.231864Z","end":"2025-09-29T11:32:37.447947Z","steps":["trace[1818807407] 'process raft request'  (duration: 214.333833ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:32:41.490882Z","caller":"traceutil/trace.go:172","msg":"trace[1943079177] linearizableReadLoop","detail":"{readStateIndex:1295; appliedIndex:1295; }","duration":"156.252408ms","start":"2025-09-29T11:32:41.334599Z","end":"2025-09-29T11:32:41.490852Z","steps":["trace[1943079177] 'read index received'  (duration: 156.245254ms)","trace[1943079177] 'applied index is now lower than readState.Index'  (duration: 4.49µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:32:41.491088Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.469181ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:41.491110Z","caller":"traceutil/trace.go:172","msg":"trace[366978766] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1264; }","duration":"156.509563ms","start":"2025-09-29T11:32:41.334595Z","end":"2025-09-29T11:32:41.491105Z","steps":["trace[366978766] 'agreement among raft nodes before linearized reading'  (duration: 156.436502ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:32:41.491567Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-29T11:32:41.150207Z","time spent":"341.358415ms","remote":"127.0.0.1:41482","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2025-09-29T11:39:57.948345Z","caller":"traceutil/trace.go:172","msg":"trace[1591406496] linearizableReadLoop","detail":"{readStateIndex:2551; appliedIndex:2551; }","duration":"124.72426ms","start":"2025-09-29T11:39:57.823478Z","end":"2025-09-29T11:39:57.948202Z","steps":["trace[1591406496] 'read index received'  (duration: 124.71863ms)","trace[1591406496] 'applied index is now lower than readState.Index'  (duration: 4.802µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:39:57.948549Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.025613ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:39:57.948597Z","caller":"traceutil/trace.go:172","msg":"trace[612703964] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2421; }","duration":"125.116152ms","start":"2025-09-29T11:39:57.823474Z","end":"2025-09-29T11:39:57.948590Z","steps":["trace[612703964] 'agreement among raft nodes before linearized reading'  (duration: 124.997233ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:39:57.949437Z","caller":"traceutil/trace.go:172","msg":"trace[1306847484] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2422; }","duration":"296.693601ms","start":"2025-09-29T11:39:57.652733Z","end":"2025-09-29T11:39:57.949427Z","steps":["trace[1306847484] 'process raft request'  (duration: 296.121623ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:39:58.302377Z","caller":"traceutil/trace.go:172","msg":"trace[126438438] transaction","detail":"{read_only:false; response_revision:2433; number_of_response:1; }","duration":"116.690338ms","start":"2025-09-29T11:39:58.185669Z","end":"2025-09-29T11:39:58.302359Z","steps":["trace[126438438] 'process raft request'  (duration: 107.946386ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:41:07.514630Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1800}
	{"level":"info","ts":"2025-09-29T11:41:07.635361Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1800,"took":"119.419717ms","hash":3783191704,"current-db-size-bytes":8732672,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":5963776,"current-db-size-in-use":"6.0 MB"}
	{"level":"info","ts":"2025-09-29T11:41:07.635428Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3783191704,"revision":1800,"compact-revision":-1}
	
	
	==> kernel <==
	 11:45:14 up 14 min,  0 users,  load average: 0.22, 0.56, 0.60
	Linux addons-214441 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [b5368f01fa76] <==
	I0929 11:39:23.839444       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0929 11:39:24.054959       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0929 11:39:24.460545       1 cacher.go:182] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0929 11:39:24.467415       1 cacher.go:182] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0929 11:39:24.500846       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0929 11:39:24.516151       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W0929 11:39:24.580645       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0929 11:39:25.117972       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0929 11:39:25.322421       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	E0929 11:39:42.471472       1 conn.go:339] Error on socket receive: read tcp 192.168.39.76:8443->192.168.39.1:44978: use of closed network connection
	E0929 11:39:42.758211       1 conn.go:339] Error on socket receive: read tcp 192.168.39.76:8443->192.168.39.1:45000: use of closed network connection
	I0929 11:39:45.674152       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:39:51.379831       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0929 11:39:51.635969       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.133.174"}
	I0929 11:39:52.039060       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.167.87"}
	I0929 11:40:21.576337       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0929 11:40:21.997121       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:41:04.368312       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:41:09.156786       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 11:41:32.070520       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:42:20.474077       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:42:56.312150       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:43:33.051574       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:44:06.773562       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:44:43.393063       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [b7a56dc83eb1] <==
	E0929 11:43:53.043994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:43:53.724552       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:43:53.725759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:44:09.946117       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:44:09.947451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:44:20.945461       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:44:20.947155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:44:24.939246       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:44:24.940440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:44:26.970611       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:44:26.972961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:44:41.264355       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:44:41.265551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:44:41.988721       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:44:41.990424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:44:44.430948       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:44:44.433503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:44:44.564706       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:44:44.566082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:44:53.380805       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:44:53.382158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:44:58.252435       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:44:58.253816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:45:14.461968       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:45:14.464025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [cf32cea21506] <==
	I0929 11:31:18.966107       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:31:19.067553       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:31:19.067585       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.76"]
	E0929 11:31:19.067663       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:31:19.367843       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 11:31:19.367925       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 11:31:19.367957       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:31:19.410838       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:31:19.411105       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:31:19.411117       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:31:19.438109       1 config.go:200] "Starting service config controller"
	I0929 11:31:19.438145       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:31:19.438165       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:31:19.438169       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:31:19.438197       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:31:19.438201       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:31:19.443612       1 config.go:309] "Starting node config controller"
	I0929 11:31:19.443644       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:31:19.443650       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 11:31:19.552512       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:31:19.552650       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 11:31:19.639397       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [1b712309a590] <==
	E0929 11:31:09.221196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 11:31:09.221236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 11:31:09.222033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 11:31:09.225006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 11:31:09.225514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 11:31:09.225802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 11:31:09.225865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 11:31:09.225922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 11:31:09.226012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 11:31:09.226045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 11:31:10.048406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 11:31:10.133629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 11:31:10.190360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 11:31:10.277104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 11:31:10.293798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 11:31:10.302970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 11:31:10.326331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 11:31:10.346485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 11:31:10.373940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 11:31:10.450205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 11:31:10.476705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 11:31:10.548049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 11:31:10.584420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 11:31:10.696768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 11:31:12.791660       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 11:44:30 addons-214441 kubelet[2504]: E0929 11:44:30.053203    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="182f1b86-e027-4d79-a5a9-272a05688c3b"
	Sep 29 11:44:30 addons-214441 kubelet[2504]: I0929 11:44:30.063757    2504 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/42986ce5-40d2-4dba-8e8b-75987cd5f446-script\") on node \"addons-214441\" DevicePath \"\""
	Sep 29 11:44:30 addons-214441 kubelet[2504]: I0929 11:44:30.063969    2504 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h6vsf\" (UniqueName: \"kubernetes.io/projected/42986ce5-40d2-4dba-8e8b-75987cd5f446-kube-api-access-h6vsf\") on node \"addons-214441\" DevicePath \"\""
	Sep 29 11:44:30 addons-214441 kubelet[2504]: I0929 11:44:30.063986    2504 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/42986ce5-40d2-4dba-8e8b-75987cd5f446-data\") on node \"addons-214441\" DevicePath \"\""
	Sep 29 11:44:32 addons-214441 kubelet[2504]: E0929 11:44:32.047053    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="aff7bf59-352b-45d6-9449-f442a6b25e27"
	Sep 29 11:44:32 addons-214441 kubelet[2504]: I0929 11:44:32.062442    2504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42986ce5-40d2-4dba-8e8b-75987cd5f446" path="/var/lib/kubelet/pods/42986ce5-40d2-4dba-8e8b-75987cd5f446/volumes"
	Sep 29 11:44:42 addons-214441 kubelet[2504]: E0929 11:44:42.059331    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="182f1b86-e027-4d79-a5a9-272a05688c3b"
	Sep 29 11:44:43 addons-214441 kubelet[2504]: E0929 11:44:43.047148    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="aff7bf59-352b-45d6-9449-f442a6b25e27"
	Sep 29 11:44:53 addons-214441 kubelet[2504]: E0929 11:44:53.050620    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="182f1b86-e027-4d79-a5a9-272a05688c3b"
	Sep 29 11:44:56 addons-214441 kubelet[2504]: I0929 11:44:56.048381    2504 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-7jx7f" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 11:44:57 addons-214441 kubelet[2504]: E0929 11:44:57.046499    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="aff7bf59-352b-45d6-9449-f442a6b25e27"
	Sep 29 11:44:59 addons-214441 kubelet[2504]: I0929 11:44:59.805077    2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/3ffabbe8-6bb8-4c81-a535-53f9b68c8721-data\") pod \"helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681\" (UID: \"3ffabbe8-6bb8-4c81-a535-53f9b68c8721\") " pod="local-path-storage/helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681"
	Sep 29 11:44:59 addons-214441 kubelet[2504]: I0929 11:44:59.805674    2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvkjr\" (UniqueName: \"kubernetes.io/projected/3ffabbe8-6bb8-4c81-a535-53f9b68c8721-kube-api-access-pvkjr\") pod \"helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681\" (UID: \"3ffabbe8-6bb8-4c81-a535-53f9b68c8721\") " pod="local-path-storage/helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681"
	Sep 29 11:44:59 addons-214441 kubelet[2504]: I0929 11:44:59.805742    2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/3ffabbe8-6bb8-4c81-a535-53f9b68c8721-script\") pod \"helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681\" (UID: \"3ffabbe8-6bb8-4c81-a535-53f9b68c8721\") " pod="local-path-storage/helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681"
	Sep 29 11:45:00 addons-214441 kubelet[2504]: E0929 11:45:00.442716    2504 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 11:45:00 addons-214441 kubelet[2504]: E0929 11:45:00.442777    2504 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 11:45:00 addons-214441 kubelet[2504]: E0929 11:45:00.442884    2504 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681_local-path-storage(3ffabbe8-6bb8-4c81-a535-53f9b68c8721): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 11:45:00 addons-214441 kubelet[2504]: E0929 11:45:00.442923    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681" podUID="3ffabbe8-6bb8-4c81-a535-53f9b68c8721"
	Sep 29 11:45:01 addons-214441 kubelet[2504]: E0929 11:45:01.416965    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681" podUID="3ffabbe8-6bb8-4c81-a535-53f9b68c8721"
	Sep 29 11:45:07 addons-214441 kubelet[2504]: E0929 11:45:07.047881    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="182f1b86-e027-4d79-a5a9-272a05688c3b"
	Sep 29 11:45:12 addons-214441 kubelet[2504]: E0929 11:45:12.045945    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="aff7bf59-352b-45d6-9449-f442a6b25e27"
	Sep 29 11:45:12 addons-214441 kubelet[2504]: E0929 11:45:12.141786    2504 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 11:45:12 addons-214441 kubelet[2504]: E0929 11:45:12.141851    2504 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 11:45:12 addons-214441 kubelet[2504]: E0929 11:45:12.141943    2504 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681_local-path-storage(3ffabbe8-6bb8-4c81-a535-53f9b68c8721): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 11:45:12 addons-214441 kubelet[2504]: E0929 11:45:12.141974    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681" podUID="3ffabbe8-6bb8-4c81-a535-53f9b68c8721"
	
	
	==> storage-provisioner [388ea771a1c8] <==
	W0929 11:44:50.619223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:44:52.623428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:44:52.631758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:44:54.636225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:44:54.646017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:44:56.650195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:44:56.661174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:44:58.671957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:44:58.680876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:00.684163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:00.689409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:02.696543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:02.707364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:04.711581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:04.718073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:06.723237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:06.729997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:08.734707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:08.743068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:10.748867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:10.758951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:12.764148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:12.775389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:14.779249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:14.787409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
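Note the single root cause running through the kubelet log above and the pod events later in this report: every failing image lives on docker.io and hits the unauthenticated pull rate limit ("toomanyrequests"), which the kubelet surfaces as ErrImagePull and then ImagePullBackOff. The post-mortem steps below list the non-Running pods with a field selector and then describe them. As a rough illustration only, a client-go sketch of the same check is shown here; the harness itself shells out to kubectl, and this standalone helper (package layout, names, and output format) is hypothetical:

	package main

	import (
		"context"
		"fmt"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes the same kubeconfig the tests use; error handling kept minimal.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Same query as the post-mortem below: pods in any namespace that are not Running.
		pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}

		// Flag containers stuck on image pulls and report whether the Docker Hub
		// rate limit ("toomanyrequests") is the stated cause.
		for _, p := range pods.Items {
			for _, st := range p.Status.ContainerStatuses {
				w := st.State.Waiting
				if w == nil || (w.Reason != "ErrImagePull" && w.Reason != "ImagePullBackOff") {
					continue
				}
				fmt.Printf("%s/%s (%s): %s rate-limited=%v\n",
					p.Namespace, p.Name, st.Name, w.Reason,
					strings.Contains(w.Message, "toomanyrequests"))
			}
		}
	}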
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-214441 -n addons-214441
helpers_test.go:269: (dbg) Run:  kubectl --context addons-214441 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-s6nvq ingress-nginx-admission-patch-tp6tp helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-214441 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-s6nvq ingress-nginx-admission-patch-tp6tp helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-214441 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-s6nvq ingress-nginx-admission-patch-tp6tp helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681: exit status 1 (99.593492ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-214441/192.168.39.76
	Start Time:       Mon, 29 Sep 2025 11:39:51 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rdmgz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rdmgz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m24s                  default-scheduler  Successfully assigned default/nginx to addons-214441
	  Warning  Failed     5m23s                  kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m30s (x5 over 5m23s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m30s (x5 over 5m23s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m30s (x4 over 5m9s)   kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    22s (x21 over 5m22s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     22s (x21 over 5m22s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-214441/192.168.39.76
	Start Time:       Mon, 29 Sep 2025 11:40:08 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:  10.244.0.31
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kt6ld (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-kt6ld:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m7s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-214441
	  Warning  Failed     4m23s                 kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m12s (x5 over 5m7s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m12s (x4 over 5m7s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m12s (x5 over 5m7s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    3s (x20 over 5m6s)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     3s (x20 over 5m6s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tffd7 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-tffd7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-s6nvq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tp6tp" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-214441 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-s6nvq ingress-nginx-admission-patch-tp6tp helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214441 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-214441 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.887705672s)
--- FAIL: TestAddons/parallel/LocalPath (345.26s)
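TestAddons/parallel/LocalPath fails on the same pull limit: the local-path helper pod never starts because its busybox image cannot be pulled, so test-local-path stays Pending and is never scheduled, as the describe output above shows. For orientation, the pod the test waits on simply writes one file into the test-pvc claim. The sketch below builds an equivalent pod with client-go types; the pod name, image, command, mount path, and claim name are taken from that describe output, while the function and package names are hypothetical and the real test drives kubectl rather than client-go:

	package localpathexample

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// createTestLocalPathPod sketches an equivalent of the test-local-path pod
	// described above. Assumes the "test-pvc" claim already exists in default.
	func createTestLocalPathPod(ctx context.Context, client *kubernetes.Clientset) error {
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: "test-local-path", Namespace: "default"},
			Spec: corev1.PodSpec{
				RestartPolicy: corev1.RestartPolicyNever, // assumption: one-shot writer pod
				Containers: []corev1.Container{{
					Name:         "busybox",
					Image:        "busybox:stable",
					Command:      []string{"sh", "-c", "echo 'local-path-provisioner' > /test/file1"},
					VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/test"}},
				}},
				Volumes: []corev1.Volume{{
					Name: "data",
					VolumeSource: corev1.VolumeSource{
						PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
							ClaimName: "test-pvc",
						},
					},
				}},
			},
		}
		_, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
		return err
	}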

                                                
                                    
x
+
TestAddons/parallel/Yakd (128.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-8b84x" [776cffb2-d8ee-4337-a96e-2a5d06549491] Pending / Ready:ContainersNotReady (containers with unready status: [yakd]) / ContainersReady:ContainersNotReady (containers with unready status: [yakd])
helpers_test.go:337: TestAddons/parallel/Yakd: WARNING: pod list for "yakd-dashboard" "app.kubernetes.io/name=yakd-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:1047: ***** TestAddons/parallel/Yakd: pod "app.kubernetes.io/name=yakd-dashboard" failed to start within 2m0s: context deadline exceeded ****
addons_test.go:1047: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-214441 -n addons-214441
addons_test.go:1047: TestAddons/parallel/Yakd: showing logs for failed pods as of 2025-09-29 11:42:10.184364783 +0000 UTC m=+716.339098470
addons_test.go:1047: (dbg) Run:  kubectl --context addons-214441 describe po yakd-dashboard-5ff678cb9-8b84x -n yakd-dashboard
addons_test.go:1047: (dbg) kubectl --context addons-214441 describe po yakd-dashboard-5ff678cb9-8b84x -n yakd-dashboard:
Name:             yakd-dashboard-5ff678cb9-8b84x
Namespace:        yakd-dashboard
Priority:         0
Service Account:  yakd-dashboard
Node:             addons-214441/192.168.39.76
Start Time:       Mon, 29 Sep 2025 11:31:27 +0000
Labels:           app.kubernetes.io/instance=yakd-dashboard
app.kubernetes.io/name=yakd-dashboard
gcp-auth-skip-secret=true
pod-template-hash=5ff678cb9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
IP:           10.244.0.11
Controlled By:  ReplicaSet/yakd-dashboard-5ff678cb9
Containers:
yakd:
Container ID:   
Image:          docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624
Image ID:       
Port:           8080/TCP (http)
Host Port:      0/TCP (http)
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
memory:  256Mi
Requests:
memory:   128Mi
Liveness:   http-get http://:8080/ delay=10s timeout=10s period=10s #success=1 #failure=3
Readiness:  http-get http://:8080/ delay=10s timeout=10s period=10s #success=1 #failure=3
Environment:
KUBERNETES_NAMESPACE:  yakd-dashboard (v1:metadata.namespace)
HOSTNAME:              yakd-dashboard-5ff678cb9-8b84x (v1:metadata.name)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x6qsk (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-x6qsk:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned yakd-dashboard/yakd-dashboard-5ff678cb9-8b84x to addons-214441
Warning  Failed     9m53s                   kubelet            Failed to pull image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    7m (x5 over 10m)        kubelet            Pulling image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
Warning  Failed     7m (x5 over 9m53s)      kubelet            Error: ErrImagePull
Warning  Failed     7m (x4 over 9m37s)      kubelet            Failed to pull image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     4m47s (x20 over 9m52s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    39s (x38 over 9m52s)    kubelet            Back-off pulling image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
addons_test.go:1047: (dbg) Run:  kubectl --context addons-214441 logs yakd-dashboard-5ff678cb9-8b84x -n yakd-dashboard
addons_test.go:1047: (dbg) Non-zero exit: kubectl --context addons-214441 logs yakd-dashboard-5ff678cb9-8b84x -n yakd-dashboard: exit status 1 (76.868554ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "yakd" in pod "yakd-dashboard-5ff678cb9-8b84x" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:1047: kubectl --context addons-214441 logs yakd-dashboard-5ff678cb9-8b84x -n yakd-dashboard: exit status 1
addons_test.go:1048: failed waiting for YAKD - Kubernetes Dashboard pod: app.kubernetes.io/name=yakd-dashboard within 2m0s: context deadline exceeded
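The Yakd failure is the readiness wait expiring rather than anything Yakd-specific: the marcnuri/yakd image never pulls (the same docker.io rate limit), so no pod matching app.kubernetes.io/name=yakd-dashboard reports Ready within the 2m0s window and the wait ends with context deadline exceeded. A rough sketch of this kind of label-selector readiness wait, using client-go's polling helper from recent apimachinery versions, follows; waitForReadyPods and its package are hypothetical, and the harness's own helper in helpers_test.go may be implemented differently:

	package waitexample

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForReadyPods polls until every pod matching selector in ns reports the
	// Ready condition, or the timeout elapses (which is what happened above).
	func waitForReadyPods(ctx context.Context, client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient errors and empty lists: keep polling
				}
				for _, p := range pods.Items {
					ready := false
					for _, c := range p.Status.Conditions {
						if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
							ready = true
						}
					}
					if !ready {
						return false, nil // e.g. still in ImagePullBackOff, as here
					}
				}
				return true, nil
			})
	}

	// Usage matching the failed step above:
	//   waitForReadyPods(ctx, client, "yakd-dashboard",
	//       "app.kubernetes.io/name=yakd-dashboard", 2*time.Minute)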
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Yakd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-214441 -n addons-214441
helpers_test.go:252: <<< TestAddons/parallel/Yakd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Yakd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-214441 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-214441 logs -n 25: (1.075035355s)
helpers_test.go:260: TestAddons/parallel/Yakd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-383930 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2  --auto-update-drivers=false                                                                                                                                                                                                                                                                                              │ download-only-383930 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ delete  │ -p download-only-383930                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ download-only-383930 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ start   │ -o=json --download-only -p download-only-221115 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=kvm2  --auto-update-drivers=false                                                                                                                                                                                                                                                                                              │ download-only-221115 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ delete  │ -p download-only-221115                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ download-only-221115 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ delete  │ -p download-only-383930                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ download-only-383930 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ delete  │ -p download-only-221115                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ download-only-221115 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ start   │ --download-only -p binary-mirror-005122 --alsologtostderr --binary-mirror http://127.0.0.1:35607 --driver=kvm2  --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-005122 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ delete  │ -p binary-mirror-005122                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ binary-mirror-005122 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ addons  │ disable dashboard -p addons-214441                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ addons  │ enable dashboard -p addons-214441                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ start   │ -p addons-214441 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:33 UTC │
	│ addons  │ addons-214441 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:39 UTC │ 29 Sep 25 11:39 UTC │
	│ addons  │ addons-214441 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:39 UTC │ 29 Sep 25 11:39 UTC │
	│ addons  │ enable headlamp -p addons-214441 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:39 UTC │ 29 Sep 25 11:39 UTC │
	│ addons  │ addons-214441 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:39 UTC │ 29 Sep 25 11:39 UTC │
	│ addons  │ addons-214441 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ addons  │ addons-214441 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ ip      │ addons-214441 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ addons  │ addons-214441 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ addons  │ addons-214441 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-214441                                                                                                                                                                                                                                                                                                                                                                                            │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	│ addons  │ addons-214441 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-214441        │ jenkins │ v1.37.0 │ 29 Sep 25 11:40 UTC │ 29 Sep 25 11:40 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:30:26
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:30:26.464374  595895 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:30:26.464481  595895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:30:26.464487  595895 out.go:374] Setting ErrFile to fd 2...
	I0929 11:30:26.464493  595895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:30:26.464787  595895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21654-591397/.minikube/bin
	I0929 11:30:26.465454  595895 out.go:368] Setting JSON to false
	I0929 11:30:26.466447  595895 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":4374,"bootTime":1759141052,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:30:26.466553  595895 start.go:140] virtualization: kvm guest
	I0929 11:30:26.468688  595895 out.go:179] * [addons-214441] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:30:26.470181  595895 out.go:179]   - MINIKUBE_LOCATION=21654
	I0929 11:30:26.470220  595895 notify.go:220] Checking for updates...
	I0929 11:30:26.473145  595895 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:30:26.474634  595895 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21654-591397/kubeconfig
	I0929 11:30:26.475793  595895 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21654-591397/.minikube
	I0929 11:30:26.477353  595895 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:30:26.478534  595895 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:30:26.479959  595895 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:30:26.513451  595895 out.go:179] * Using the kvm2 driver based on user configuration
	I0929 11:30:26.514622  595895 start.go:304] selected driver: kvm2
	I0929 11:30:26.514644  595895 start.go:924] validating driver "kvm2" against <nil>
	I0929 11:30:26.514659  595895 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:30:26.515675  595895 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:30:26.515785  595895 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21654-591397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:30:26.530531  595895 install.go:163] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:30:26.530568  595895 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21654-591397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:30:26.545187  595895 install.go:163] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:30:26.545244  595895 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 11:30:26.545491  595895 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:30:26.545527  595895 cni.go:84] Creating CNI manager for ""
	I0929 11:30:26.545570  595895 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 11:30:26.545579  595895 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 11:30:26.545628  595895 start.go:348] cluster config:
	{Name:addons-214441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:30:26.545714  595895 iso.go:125] acquiring lock: {Name:mk3bf2644aacab696b9f4d566a6d81a30d75b71a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:30:26.547400  595895 out.go:179] * Starting "addons-214441" primary control-plane node in "addons-214441" cluster
	I0929 11:30:26.548855  595895 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 11:30:26.548909  595895 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21654-591397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0929 11:30:26.548918  595895 cache.go:58] Caching tarball of preloaded images
	I0929 11:30:26.549035  595895 preload.go:172] Found /home/jenkins/minikube-integration/21654-591397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0929 11:30:26.549046  595895 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
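
The lines above show minikube finding the preloaded image tarball in its local cache and skipping the download. A minimal sketch of that existence check follows; the paths and the helper name are illustrative, not minikube's actual code.

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // preloadExists reports whether a preloaded-images tarball is already cached
    // locally, so the (large) download can be skipped. Layout is illustrative.
    func preloadExists(minikubeHome, k8sVersion, runtime string) (string, bool) {
    	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
    	path := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
    	if _, err := os.Stat(path); err != nil {
    		return path, false
    	}
    	return path, true
    }

    func main() {
    	if p, ok := preloadExists(os.ExpandEnv("$HOME/.minikube"), "v1.34.0", "docker"); ok {
    		fmt.Println("found local preload, skipping download:", p)
    	} else {
    		fmt.Println("no local preload at", p, "- would download")
    	}
    }
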
	I0929 11:30:26.549389  595895 profile.go:143] Saving config to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/config.json ...
	I0929 11:30:26.549415  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/config.json: {Name:mka28e9e486990f30eb3eb321797c26d13a435f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:26.549559  595895 start.go:360] acquireMachinesLock for addons-214441: {Name:mka3370f06ebed6e47b43729e748683065f344f5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0929 11:30:26.549614  595895 start.go:364] duration metric: took 40.43µs to acquireMachinesLock for "addons-214441"
	I0929 11:30:26.549633  595895 start.go:93] Provisioning new machine with config: &{Name:addons-214441 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 11:30:26.549681  595895 start.go:125] createHost starting for "" (driver="kvm2")
	I0929 11:30:26.551210  595895 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0929 11:30:26.551360  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:30:26.551417  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:30:26.564991  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37291
	I0929 11:30:26.565640  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:30:26.566242  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:30:26.566262  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:30:26.566742  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:30:26.566933  595895 main.go:141] libmachine: (addons-214441) Calling .GetMachineName
	I0929 11:30:26.567150  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:26.567316  595895 start.go:159] libmachine.API.Create for "addons-214441" (driver="kvm2")
	I0929 11:30:26.567351  595895 client.go:168] LocalClient.Create starting
	I0929 11:30:26.567402  595895 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem
	I0929 11:30:26.955780  595895 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/cert.pem
	I0929 11:30:27.214636  595895 main.go:141] libmachine: Running pre-create checks...
	I0929 11:30:27.214665  595895 main.go:141] libmachine: (addons-214441) Calling .PreCreateCheck
	I0929 11:30:27.215304  595895 main.go:141] libmachine: (addons-214441) Calling .GetConfigRaw
	I0929 11:30:27.215869  595895 main.go:141] libmachine: Creating machine...
	I0929 11:30:27.215887  595895 main.go:141] libmachine: (addons-214441) Calling .Create
	I0929 11:30:27.216119  595895 main.go:141] libmachine: (addons-214441) creating domain...
	I0929 11:30:27.216141  595895 main.go:141] libmachine: (addons-214441) creating network...
	I0929 11:30:27.217698  595895 main.go:141] libmachine: (addons-214441) DBG | found existing default network
	I0929 11:30:27.217987  595895 main.go:141] libmachine: (addons-214441) DBG | <network>
	I0929 11:30:27.218041  595895 main.go:141] libmachine: (addons-214441) DBG |   <name>default</name>
	I0929 11:30:27.218077  595895 main.go:141] libmachine: (addons-214441) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I0929 11:30:27.218099  595895 main.go:141] libmachine: (addons-214441) DBG |   <forward mode='nat'>
	I0929 11:30:27.218124  595895 main.go:141] libmachine: (addons-214441) DBG |     <nat>
	I0929 11:30:27.218134  595895 main.go:141] libmachine: (addons-214441) DBG |       <port start='1024' end='65535'/>
	I0929 11:30:27.218144  595895 main.go:141] libmachine: (addons-214441) DBG |     </nat>
	I0929 11:30:27.218151  595895 main.go:141] libmachine: (addons-214441) DBG |   </forward>
	I0929 11:30:27.218161  595895 main.go:141] libmachine: (addons-214441) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I0929 11:30:27.218190  595895 main.go:141] libmachine: (addons-214441) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I0929 11:30:27.218203  595895 main.go:141] libmachine: (addons-214441) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I0929 11:30:27.218212  595895 main.go:141] libmachine: (addons-214441) DBG |     <dhcp>
	I0929 11:30:27.218222  595895 main.go:141] libmachine: (addons-214441) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I0929 11:30:27.218232  595895 main.go:141] libmachine: (addons-214441) DBG |     </dhcp>
	I0929 11:30:27.218245  595895 main.go:141] libmachine: (addons-214441) DBG |   </ip>
	I0929 11:30:27.218256  595895 main.go:141] libmachine: (addons-214441) DBG | </network>
	I0929 11:30:27.218263  595895 main.go:141] libmachine: (addons-214441) DBG | 
	I0929 11:30:27.219018  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.218796  595923 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000200f10}
	I0929 11:30:27.219127  595895 main.go:141] libmachine: (addons-214441) DBG | defining private network:
	I0929 11:30:27.219156  595895 main.go:141] libmachine: (addons-214441) DBG | 
	I0929 11:30:27.219168  595895 main.go:141] libmachine: (addons-214441) DBG | <network>
	I0929 11:30:27.219179  595895 main.go:141] libmachine: (addons-214441) DBG |   <name>mk-addons-214441</name>
	I0929 11:30:27.219187  595895 main.go:141] libmachine: (addons-214441) DBG |   <dns enable='no'/>
	I0929 11:30:27.219194  595895 main.go:141] libmachine: (addons-214441) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0929 11:30:27.219200  595895 main.go:141] libmachine: (addons-214441) DBG |     <dhcp>
	I0929 11:30:27.219208  595895 main.go:141] libmachine: (addons-214441) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0929 11:30:27.219214  595895 main.go:141] libmachine: (addons-214441) DBG |     </dhcp>
	I0929 11:30:27.219218  595895 main.go:141] libmachine: (addons-214441) DBG |   </ip>
	I0929 11:30:27.219224  595895 main.go:141] libmachine: (addons-214441) DBG | </network>
	I0929 11:30:27.219227  595895 main.go:141] libmachine: (addons-214441) DBG | 
	I0929 11:30:27.225021  595895 main.go:141] libmachine: (addons-214441) DBG | creating private network mk-addons-214441 192.168.39.0/24...
	I0929 11:30:27.300287  595895 main.go:141] libmachine: (addons-214441) DBG | private network mk-addons-214441 192.168.39.0/24 created
	I0929 11:30:27.300635  595895 main.go:141] libmachine: (addons-214441) DBG | <network>
	I0929 11:30:27.300651  595895 main.go:141] libmachine: (addons-214441) DBG |   <name>mk-addons-214441</name>
	I0929 11:30:27.300675  595895 main.go:141] libmachine: (addons-214441) setting up store path in /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441 ...
	I0929 11:30:27.300695  595895 main.go:141] libmachine: (addons-214441) building disk image from file:///home/jenkins/minikube-integration/21654-591397/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I0929 11:30:27.300713  595895 main.go:141] libmachine: (addons-214441) DBG |   <uuid>9d6191f7-7df6-4691-bff3-3dbacc8ac925</uuid>
	I0929 11:30:27.300719  595895 main.go:141] libmachine: (addons-214441) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I0929 11:30:27.300726  595895 main.go:141] libmachine: (addons-214441) DBG |   <mac address='52:54:00:ff:bc:22'/>
	I0929 11:30:27.300730  595895 main.go:141] libmachine: (addons-214441) DBG |   <dns enable='no'/>
	I0929 11:30:27.300736  595895 main.go:141] libmachine: (addons-214441) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0929 11:30:27.300741  595895 main.go:141] libmachine: (addons-214441) DBG |     <dhcp>
	I0929 11:30:27.300747  595895 main.go:141] libmachine: (addons-214441) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0929 11:30:27.300754  595895 main.go:141] libmachine: (addons-214441) DBG |     </dhcp>
	I0929 11:30:27.300758  595895 main.go:141] libmachine: (addons-214441) DBG |   </ip>
	I0929 11:30:27.300763  595895 main.go:141] libmachine: (addons-214441) DBG | </network>
	I0929 11:30:27.300770  595895 main.go:141] libmachine: (addons-214441) DBG | 
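
The private network mk-addons-214441 is defined from the XML dumped above and brought up before the VM is created. The kvm2 driver does this through the libvirt API; the sketch below reproduces the same define/start sequence by shelling out to virsh (assumed to be installed and able to reach qemu:///system), purely as an illustration.

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    )

    // networkXML mirrors the private-network definition logged above.
    const networkXML = `<network>
      <name>mk-addons-214441</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
    	// Write the XML to a temp file and hand it to virsh.
    	f, err := os.CreateTemp("", "mk-net-*.xml")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer os.Remove(f.Name())
    	if _, err := f.WriteString(networkXML); err != nil {
    		log.Fatal(err)
    	}
    	f.Close()

    	for _, args := range [][]string{
    		{"net-define", f.Name()},          // register the network with libvirt
    		{"net-start", "mk-addons-214441"}, // bring up the bridge and its dnsmasq
    	} {
    		out, err := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...).CombinedOutput()
    		if err != nil {
    			log.Fatalf("virsh %v: %v\n%s", args, err, out)
    		}
    		fmt.Printf("virsh %v ok\n", args)
    	}
    }
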
	I0929 11:30:27.300780  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.300615  595923 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21654-591397/.minikube
	I0929 11:30:27.300970  595895 main.go:141] libmachine: (addons-214441) Downloading /home/jenkins/minikube-integration/21654-591397/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21654-591397/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I0929 11:30:27.567829  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.567633  595923 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa...
	I0929 11:30:27.812384  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.812174  595923 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/addons-214441.rawdisk...
	I0929 11:30:27.812428  595895 main.go:141] libmachine: (addons-214441) DBG | Writing magic tar header
	I0929 11:30:27.812454  595895 main.go:141] libmachine: (addons-214441) DBG | Writing SSH key tar header
	I0929 11:30:27.812465  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:27.812330  595923 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441 ...
	I0929 11:30:27.812480  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441
	I0929 11:30:27.812548  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21654-591397/.minikube/machines
	I0929 11:30:27.812584  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441 (perms=drwx------)
	I0929 11:30:27.812594  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21654-591397/.minikube
	I0929 11:30:27.812609  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21654-591397
	I0929 11:30:27.812617  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0929 11:30:27.812625  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home/jenkins
	I0929 11:30:27.812632  595895 main.go:141] libmachine: (addons-214441) DBG | checking permissions on dir: /home
	I0929 11:30:27.812642  595895 main.go:141] libmachine: (addons-214441) DBG | skipping /home - not owner
	I0929 11:30:27.812734  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration/21654-591397/.minikube/machines (perms=drwxr-xr-x)
	I0929 11:30:27.812784  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration/21654-591397/.minikube (perms=drwxr-xr-x)
	I0929 11:30:27.812829  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration/21654-591397 (perms=drwxrwxr-x)
	I0929 11:30:27.812851  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0929 11:30:27.812866  595895 main.go:141] libmachine: (addons-214441) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0929 11:30:27.812895  595895 main.go:141] libmachine: (addons-214441) defining domain...
	I0929 11:30:27.814169  595895 main.go:141] libmachine: (addons-214441) defining domain using XML: 
	I0929 11:30:27.814189  595895 main.go:141] libmachine: (addons-214441) <domain type='kvm'>
	I0929 11:30:27.814197  595895 main.go:141] libmachine: (addons-214441)   <name>addons-214441</name>
	I0929 11:30:27.814204  595895 main.go:141] libmachine: (addons-214441)   <memory unit='MiB'>4096</memory>
	I0929 11:30:27.814211  595895 main.go:141] libmachine: (addons-214441)   <vcpu>2</vcpu>
	I0929 11:30:27.814217  595895 main.go:141] libmachine: (addons-214441)   <features>
	I0929 11:30:27.814224  595895 main.go:141] libmachine: (addons-214441)     <acpi/>
	I0929 11:30:27.814236  595895 main.go:141] libmachine: (addons-214441)     <apic/>
	I0929 11:30:27.814260  595895 main.go:141] libmachine: (addons-214441)     <pae/>
	I0929 11:30:27.814274  595895 main.go:141] libmachine: (addons-214441)   </features>
	I0929 11:30:27.814283  595895 main.go:141] libmachine: (addons-214441)   <cpu mode='host-passthrough'>
	I0929 11:30:27.814290  595895 main.go:141] libmachine: (addons-214441)   </cpu>
	I0929 11:30:27.814300  595895 main.go:141] libmachine: (addons-214441)   <os>
	I0929 11:30:27.814310  595895 main.go:141] libmachine: (addons-214441)     <type>hvm</type>
	I0929 11:30:27.814319  595895 main.go:141] libmachine: (addons-214441)     <boot dev='cdrom'/>
	I0929 11:30:27.814323  595895 main.go:141] libmachine: (addons-214441)     <boot dev='hd'/>
	I0929 11:30:27.814331  595895 main.go:141] libmachine: (addons-214441)     <bootmenu enable='no'/>
	I0929 11:30:27.814337  595895 main.go:141] libmachine: (addons-214441)   </os>
	I0929 11:30:27.814342  595895 main.go:141] libmachine: (addons-214441)   <devices>
	I0929 11:30:27.814352  595895 main.go:141] libmachine: (addons-214441)     <disk type='file' device='cdrom'>
	I0929 11:30:27.814381  595895 main.go:141] libmachine: (addons-214441)       <source file='/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/boot2docker.iso'/>
	I0929 11:30:27.814393  595895 main.go:141] libmachine: (addons-214441)       <target dev='hdc' bus='scsi'/>
	I0929 11:30:27.814438  595895 main.go:141] libmachine: (addons-214441)       <readonly/>
	I0929 11:30:27.814469  595895 main.go:141] libmachine: (addons-214441)     </disk>
	I0929 11:30:27.814485  595895 main.go:141] libmachine: (addons-214441)     <disk type='file' device='disk'>
	I0929 11:30:27.814501  595895 main.go:141] libmachine: (addons-214441)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0929 11:30:27.814519  595895 main.go:141] libmachine: (addons-214441)       <source file='/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/addons-214441.rawdisk'/>
	I0929 11:30:27.814537  595895 main.go:141] libmachine: (addons-214441)       <target dev='hda' bus='virtio'/>
	I0929 11:30:27.814551  595895 main.go:141] libmachine: (addons-214441)     </disk>
	I0929 11:30:27.814564  595895 main.go:141] libmachine: (addons-214441)     <interface type='network'>
	I0929 11:30:27.814577  595895 main.go:141] libmachine: (addons-214441)       <source network='mk-addons-214441'/>
	I0929 11:30:27.814587  595895 main.go:141] libmachine: (addons-214441)       <model type='virtio'/>
	I0929 11:30:27.814598  595895 main.go:141] libmachine: (addons-214441)     </interface>
	I0929 11:30:27.814608  595895 main.go:141] libmachine: (addons-214441)     <interface type='network'>
	I0929 11:30:27.814616  595895 main.go:141] libmachine: (addons-214441)       <source network='default'/>
	I0929 11:30:27.814644  595895 main.go:141] libmachine: (addons-214441)       <model type='virtio'/>
	I0929 11:30:27.814658  595895 main.go:141] libmachine: (addons-214441)     </interface>
	I0929 11:30:27.814670  595895 main.go:141] libmachine: (addons-214441)     <serial type='pty'>
	I0929 11:30:27.814681  595895 main.go:141] libmachine: (addons-214441)       <target port='0'/>
	I0929 11:30:27.814692  595895 main.go:141] libmachine: (addons-214441)     </serial>
	I0929 11:30:27.814707  595895 main.go:141] libmachine: (addons-214441)     <console type='pty'>
	I0929 11:30:27.814717  595895 main.go:141] libmachine: (addons-214441)       <target type='serial' port='0'/>
	I0929 11:30:27.814725  595895 main.go:141] libmachine: (addons-214441)     </console>
	I0929 11:30:27.814732  595895 main.go:141] libmachine: (addons-214441)     <rng model='virtio'>
	I0929 11:30:27.814741  595895 main.go:141] libmachine: (addons-214441)       <backend model='random'>/dev/random</backend>
	I0929 11:30:27.814750  595895 main.go:141] libmachine: (addons-214441)     </rng>
	I0929 11:30:27.814759  595895 main.go:141] libmachine: (addons-214441)   </devices>
	I0929 11:30:27.814768  595895 main.go:141] libmachine: (addons-214441) </domain>
	I0929 11:30:27.814781  595895 main.go:141] libmachine: (addons-214441) 
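
The <domain> document logged above is then handed to libvirt and the domain is started (the "defining domain" and "starting domain" lines). An equivalent done with virsh rather than the driver's own libvirt bindings, with a hypothetical path for the XML file:

    package main

    import (
    	"log"
    	"os/exec"
    )

    // virsh runs a single virsh subcommand against the system libvirt daemon.
    func virsh(args ...string) {
    	out, err := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...).CombinedOutput()
    	if err != nil {
    		log.Fatalf("virsh %v: %v\n%s", args, err, out)
    	}
    }

    func main() {
    	// The XML path is hypothetical; the file would hold a <domain> document
    	// like the one dumped in the log above.
    	virsh("define", "/tmp/addons-214441.xml")
    	virsh("start", "addons-214441")
    }
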
	I0929 11:30:27.822484  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:b8:70:d1 in network default
	I0929 11:30:27.823310  595895 main.go:141] libmachine: (addons-214441) starting domain...
	I0929 11:30:27.823336  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:27.823353  595895 main.go:141] libmachine: (addons-214441) ensuring networks are active...
	I0929 11:30:27.824165  595895 main.go:141] libmachine: (addons-214441) Ensuring network default is active
	I0929 11:30:27.824600  595895 main.go:141] libmachine: (addons-214441) Ensuring network mk-addons-214441 is active
	I0929 11:30:27.825327  595895 main.go:141] libmachine: (addons-214441) getting domain XML...
	I0929 11:30:27.826485  595895 main.go:141] libmachine: (addons-214441) DBG | starting domain XML:
	I0929 11:30:27.826497  595895 main.go:141] libmachine: (addons-214441) DBG | <domain type='kvm'>
	I0929 11:30:27.826534  595895 main.go:141] libmachine: (addons-214441) DBG |   <name>addons-214441</name>
	I0929 11:30:27.826556  595895 main.go:141] libmachine: (addons-214441) DBG |   <uuid>44179717-3988-47cd-b8d8-61dffe58e059</uuid>
	I0929 11:30:27.826564  595895 main.go:141] libmachine: (addons-214441) DBG |   <memory unit='KiB'>4194304</memory>
	I0929 11:30:27.826573  595895 main.go:141] libmachine: (addons-214441) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I0929 11:30:27.826583  595895 main.go:141] libmachine: (addons-214441) DBG |   <vcpu placement='static'>2</vcpu>
	I0929 11:30:27.826594  595895 main.go:141] libmachine: (addons-214441) DBG |   <os>
	I0929 11:30:27.826603  595895 main.go:141] libmachine: (addons-214441) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0929 11:30:27.826611  595895 main.go:141] libmachine: (addons-214441) DBG |     <boot dev='cdrom'/>
	I0929 11:30:27.826619  595895 main.go:141] libmachine: (addons-214441) DBG |     <boot dev='hd'/>
	I0929 11:30:27.826627  595895 main.go:141] libmachine: (addons-214441) DBG |     <bootmenu enable='no'/>
	I0929 11:30:27.826636  595895 main.go:141] libmachine: (addons-214441) DBG |   </os>
	I0929 11:30:27.826643  595895 main.go:141] libmachine: (addons-214441) DBG |   <features>
	I0929 11:30:27.826652  595895 main.go:141] libmachine: (addons-214441) DBG |     <acpi/>
	I0929 11:30:27.826658  595895 main.go:141] libmachine: (addons-214441) DBG |     <apic/>
	I0929 11:30:27.826666  595895 main.go:141] libmachine: (addons-214441) DBG |     <pae/>
	I0929 11:30:27.826670  595895 main.go:141] libmachine: (addons-214441) DBG |   </features>
	I0929 11:30:27.826676  595895 main.go:141] libmachine: (addons-214441) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0929 11:30:27.826680  595895 main.go:141] libmachine: (addons-214441) DBG |   <clock offset='utc'/>
	I0929 11:30:27.826712  595895 main.go:141] libmachine: (addons-214441) DBG |   <on_poweroff>destroy</on_poweroff>
	I0929 11:30:27.826730  595895 main.go:141] libmachine: (addons-214441) DBG |   <on_reboot>restart</on_reboot>
	I0929 11:30:27.826740  595895 main.go:141] libmachine: (addons-214441) DBG |   <on_crash>destroy</on_crash>
	I0929 11:30:27.826748  595895 main.go:141] libmachine: (addons-214441) DBG |   <devices>
	I0929 11:30:27.826760  595895 main.go:141] libmachine: (addons-214441) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0929 11:30:27.826771  595895 main.go:141] libmachine: (addons-214441) DBG |     <disk type='file' device='cdrom'>
	I0929 11:30:27.826782  595895 main.go:141] libmachine: (addons-214441) DBG |       <driver name='qemu' type='raw'/>
	I0929 11:30:27.826804  595895 main.go:141] libmachine: (addons-214441) DBG |       <source file='/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/boot2docker.iso'/>
	I0929 11:30:27.826817  595895 main.go:141] libmachine: (addons-214441) DBG |       <target dev='hdc' bus='scsi'/>
	I0929 11:30:27.826828  595895 main.go:141] libmachine: (addons-214441) DBG |       <readonly/>
	I0929 11:30:27.826842  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0929 11:30:27.826853  595895 main.go:141] libmachine: (addons-214441) DBG |     </disk>
	I0929 11:30:27.826863  595895 main.go:141] libmachine: (addons-214441) DBG |     <disk type='file' device='disk'>
	I0929 11:30:27.826884  595895 main.go:141] libmachine: (addons-214441) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0929 11:30:27.826906  595895 main.go:141] libmachine: (addons-214441) DBG |       <source file='/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/addons-214441.rawdisk'/>
	I0929 11:30:27.826922  595895 main.go:141] libmachine: (addons-214441) DBG |       <target dev='hda' bus='virtio'/>
	I0929 11:30:27.826937  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0929 11:30:27.826947  595895 main.go:141] libmachine: (addons-214441) DBG |     </disk>
	I0929 11:30:27.826959  595895 main.go:141] libmachine: (addons-214441) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0929 11:30:27.826972  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0929 11:30:27.826984  595895 main.go:141] libmachine: (addons-214441) DBG |     </controller>
	I0929 11:30:27.827000  595895 main.go:141] libmachine: (addons-214441) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0929 11:30:27.827014  595895 main.go:141] libmachine: (addons-214441) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0929 11:30:27.827028  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0929 11:30:27.827039  595895 main.go:141] libmachine: (addons-214441) DBG |     </controller>
	I0929 11:30:27.827046  595895 main.go:141] libmachine: (addons-214441) DBG |     <interface type='network'>
	I0929 11:30:27.827053  595895 main.go:141] libmachine: (addons-214441) DBG |       <mac address='52:54:00:98:9c:d8'/>
	I0929 11:30:27.827060  595895 main.go:141] libmachine: (addons-214441) DBG |       <source network='mk-addons-214441'/>
	I0929 11:30:27.827087  595895 main.go:141] libmachine: (addons-214441) DBG |       <model type='virtio'/>
	I0929 11:30:27.827120  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0929 11:30:27.827133  595895 main.go:141] libmachine: (addons-214441) DBG |     </interface>
	I0929 11:30:27.827141  595895 main.go:141] libmachine: (addons-214441) DBG |     <interface type='network'>
	I0929 11:30:27.827146  595895 main.go:141] libmachine: (addons-214441) DBG |       <mac address='52:54:00:b8:70:d1'/>
	I0929 11:30:27.827154  595895 main.go:141] libmachine: (addons-214441) DBG |       <source network='default'/>
	I0929 11:30:27.827172  595895 main.go:141] libmachine: (addons-214441) DBG |       <model type='virtio'/>
	I0929 11:30:27.827197  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0929 11:30:27.827208  595895 main.go:141] libmachine: (addons-214441) DBG |     </interface>
	I0929 11:30:27.827218  595895 main.go:141] libmachine: (addons-214441) DBG |     <serial type='pty'>
	I0929 11:30:27.827232  595895 main.go:141] libmachine: (addons-214441) DBG |       <target type='isa-serial' port='0'>
	I0929 11:30:27.827252  595895 main.go:141] libmachine: (addons-214441) DBG |         <model name='isa-serial'/>
	I0929 11:30:27.827267  595895 main.go:141] libmachine: (addons-214441) DBG |       </target>
	I0929 11:30:27.827295  595895 main.go:141] libmachine: (addons-214441) DBG |     </serial>
	I0929 11:30:27.827306  595895 main.go:141] libmachine: (addons-214441) DBG |     <console type='pty'>
	I0929 11:30:27.827316  595895 main.go:141] libmachine: (addons-214441) DBG |       <target type='serial' port='0'/>
	I0929 11:30:27.827327  595895 main.go:141] libmachine: (addons-214441) DBG |     </console>
	I0929 11:30:27.827337  595895 main.go:141] libmachine: (addons-214441) DBG |     <input type='mouse' bus='ps2'/>
	I0929 11:30:27.827353  595895 main.go:141] libmachine: (addons-214441) DBG |     <input type='keyboard' bus='ps2'/>
	I0929 11:30:27.827365  595895 main.go:141] libmachine: (addons-214441) DBG |     <audio id='1' type='none'/>
	I0929 11:30:27.827381  595895 main.go:141] libmachine: (addons-214441) DBG |     <memballoon model='virtio'>
	I0929 11:30:27.827396  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0929 11:30:27.827407  595895 main.go:141] libmachine: (addons-214441) DBG |     </memballoon>
	I0929 11:30:27.827416  595895 main.go:141] libmachine: (addons-214441) DBG |     <rng model='virtio'>
	I0929 11:30:27.827462  595895 main.go:141] libmachine: (addons-214441) DBG |       <backend model='random'>/dev/random</backend>
	I0929 11:30:27.827477  595895 main.go:141] libmachine: (addons-214441) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0929 11:30:27.827484  595895 main.go:141] libmachine: (addons-214441) DBG |     </rng>
	I0929 11:30:27.827492  595895 main.go:141] libmachine: (addons-214441) DBG |   </devices>
	I0929 11:30:27.827507  595895 main.go:141] libmachine: (addons-214441) DBG | </domain>
	I0929 11:30:27.827523  595895 main.go:141] libmachine: (addons-214441) DBG | 
	I0929 11:30:29.153785  595895 main.go:141] libmachine: (addons-214441) waiting for domain to start...
	I0929 11:30:29.155338  595895 main.go:141] libmachine: (addons-214441) domain is now running
	I0929 11:30:29.155366  595895 main.go:141] libmachine: (addons-214441) waiting for IP...
	I0929 11:30:29.156233  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:29.156741  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:29.156768  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:29.157097  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:29.157173  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:29.157084  595923 retry.go:31] will retry after 193.130078ms: waiting for domain to come up
	I0929 11:30:29.351641  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:29.352088  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:29.352131  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:29.352401  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:29.352453  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:29.352389  595923 retry.go:31] will retry after 298.936458ms: waiting for domain to come up
	I0929 11:30:29.653209  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:29.653776  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:29.653812  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:29.654092  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:29.654145  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:29.654057  595923 retry.go:31] will retry after 319.170448ms: waiting for domain to come up
	I0929 11:30:29.974953  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:29.975656  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:29.975697  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:29.976026  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:29.976053  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:29.976008  595923 retry.go:31] will retry after 599.248845ms: waiting for domain to come up
	I0929 11:30:30.576933  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:30.577607  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:30.577638  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:30.577976  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:30.578001  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:30.577944  595923 retry.go:31] will retry after 506.439756ms: waiting for domain to come up
	I0929 11:30:31.085911  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:31.086486  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:31.086516  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:31.086838  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:31.086901  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:31.086827  595923 retry.go:31] will retry after 714.950089ms: waiting for domain to come up
	I0929 11:30:31.803913  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:31.804432  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:31.804465  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:31.804799  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:31.804835  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:31.804762  595923 retry.go:31] will retry after 948.596157ms: waiting for domain to come up
	I0929 11:30:32.755226  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:32.755814  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:32.755837  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:32.756159  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:32.756191  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:32.756135  595923 retry.go:31] will retry after 1.377051804s: waiting for domain to come up
	I0929 11:30:34.136012  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:34.136582  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:34.136605  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:34.136880  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:34.136912  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:34.136849  595923 retry.go:31] will retry after 1.34696154s: waiting for domain to come up
	I0929 11:30:35.485739  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:35.486269  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:35.486292  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:35.486548  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:35.486587  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:35.486521  595923 retry.go:31] will retry after 1.574508192s: waiting for domain to come up
	I0929 11:30:37.063528  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:37.064142  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:37.064170  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:37.064559  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:37.064594  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:37.064489  595923 retry.go:31] will retry after 2.067291223s: waiting for domain to come up
	I0929 11:30:39.135405  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:39.135998  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:39.136030  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:39.136354  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:39.136412  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:39.136338  595923 retry.go:31] will retry after 3.104602856s: waiting for domain to come up
	I0929 11:30:42.242410  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:42.242939  595895 main.go:141] libmachine: (addons-214441) DBG | no network interface addresses found for domain addons-214441 (source=lease)
	I0929 11:30:42.242965  595895 main.go:141] libmachine: (addons-214441) DBG | trying to list again with source=arp
	I0929 11:30:42.243288  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find current IP address of domain addons-214441 in network mk-addons-214441 (interfaces detected: [])
	I0929 11:30:42.243344  595895 main.go:141] libmachine: (addons-214441) DBG | I0929 11:30:42.243280  595923 retry.go:31] will retry after 4.150705767s: waiting for domain to come up
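
The retry lines above are the driver polling the network's DHCP leases (falling back to ARP) for the VM's MAC address until one appears, with an increasing backoff. A rough stand-in that parses `virsh net-dhcp-leases` output; the regexp-based parsing and the backoff constants are simplifications, not minikube's implementation.

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"regexp"
    	"time"
    )

    // waitForIP polls the network's DHCP leases until the given MAC shows up or
    // the deadline expires.
    func waitForIP(network, mac string, timeout time.Duration) (string, error) {
    	ipRe := regexp.MustCompile(mac + `\s+ipv4\s+(\d+\.\d+\.\d+\.\d+)`)
    	deadline := time.Now().Add(timeout)
    	backoff := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("virsh", "-c", "qemu:///system", "net-dhcp-leases", network).CombinedOutput()
    		if err == nil {
    			if m := ipRe.FindSubmatch(out); m != nil {
    				return string(m[1]), nil
    			}
    		}
    		time.Sleep(backoff)
    		backoff *= 2 // crude exponential backoff; the real retry helper adds jitter
    	}
    	return "", fmt.Errorf("no DHCP lease for %s in %s within %s", mac, network, timeout)
    }

    func main() {
    	ip, err := waitForIP("mk-addons-214441", "52:54:00:98:9c:d8", 2*time.Minute)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("domain IP:", ip)
    }
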
	I0929 11:30:46.398779  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.399347  595895 main.go:141] libmachine: (addons-214441) found domain IP: 192.168.39.76
	I0929 11:30:46.399374  595895 main.go:141] libmachine: (addons-214441) reserving static IP address...
	I0929 11:30:46.399388  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has current primary IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.399901  595895 main.go:141] libmachine: (addons-214441) DBG | unable to find host DHCP lease matching {name: "addons-214441", mac: "52:54:00:98:9c:d8", ip: "192.168.39.76"} in network mk-addons-214441
	I0929 11:30:46.587177  595895 main.go:141] libmachine: (addons-214441) DBG | Getting to WaitForSSH function...
	I0929 11:30:46.587215  595895 main.go:141] libmachine: (addons-214441) reserved static IP address 192.168.39.76 for domain addons-214441
	I0929 11:30:46.587228  595895 main.go:141] libmachine: (addons-214441) waiting for SSH...
	I0929 11:30:46.590179  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.590588  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:minikube Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:46.590626  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.590750  595895 main.go:141] libmachine: (addons-214441) DBG | Using SSH client type: external
	I0929 11:30:46.590791  595895 main.go:141] libmachine: (addons-214441) DBG | Using SSH private key: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa (-rw-------)
	I0929 11:30:46.590840  595895 main.go:141] libmachine: (addons-214441) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0929 11:30:46.590868  595895 main.go:141] libmachine: (addons-214441) DBG | About to run SSH command:
	I0929 11:30:46.590883  595895 main.go:141] libmachine: (addons-214441) DBG | exit 0
	I0929 11:30:46.729877  595895 main.go:141] libmachine: (addons-214441) DBG | SSH cmd err, output: <nil>: 
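
"waiting for SSH" is a retry loop around the external ssh invocation shown above (`... exit 0`) until the guest's sshd accepts the injected key. A trimmed-down version of that probe, with a placeholder key path:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"time"
    )

    // sshReady returns nil once `ssh <opts> docker@ip exit 0` succeeds, i.e. the
    // guest's sshd is up and the injected key is accepted.
    func sshReady(ip, keyPath string, timeout time.Duration) error {
    	args := []string{
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-o", "PasswordAuthentication=no",
    		"-i", keyPath,
    		"docker@" + ip, "exit", "0",
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := exec.Command("ssh", args...).Run(); err == nil {
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("ssh to %s not ready within %s", ip, timeout)
    }

    func main() {
    	if err := sshReady("192.168.39.76", "/path/to/machines/addons-214441/id_rsa", 3*time.Minute); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("SSH is up")
    }
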
	I0929 11:30:46.730171  595895 main.go:141] libmachine: (addons-214441) domain creation complete
	I0929 11:30:46.730534  595895 main.go:141] libmachine: (addons-214441) Calling .GetConfigRaw
	I0929 11:30:46.731196  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:46.731410  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:46.731600  595895 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0929 11:30:46.731623  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:30:46.732882  595895 main.go:141] libmachine: Detecting operating system of created instance...
	I0929 11:30:46.732897  595895 main.go:141] libmachine: Waiting for SSH to be available...
	I0929 11:30:46.732902  595895 main.go:141] libmachine: Getting to WaitForSSH function...
	I0929 11:30:46.732908  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:46.735685  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.736210  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:46.736238  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.736397  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:46.736652  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.736854  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.736998  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:46.737156  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:46.737392  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:46.737403  595895 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0929 11:30:46.844278  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 11:30:46.844312  595895 main.go:141] libmachine: Detecting the provisioner...
	I0929 11:30:46.844324  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:46.848224  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.849228  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:46.849264  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.849457  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:46.849706  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.849884  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.850038  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:46.850227  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:46.850481  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:46.850494  595895 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0929 11:30:46.959386  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0929 11:30:46.959537  595895 main.go:141] libmachine: found compatible host: buildroot
	I0929 11:30:46.959560  595895 main.go:141] libmachine: Provisioning with buildroot...
	I0929 11:30:46.959572  595895 main.go:141] libmachine: (addons-214441) Calling .GetMachineName
	I0929 11:30:46.959897  595895 buildroot.go:166] provisioning hostname "addons-214441"
	I0929 11:30:46.959920  595895 main.go:141] libmachine: (addons-214441) Calling .GetMachineName
	I0929 11:30:46.960158  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:46.963429  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.963851  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:46.963892  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:46.964187  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:46.964389  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.964590  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:46.964750  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:46.964942  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:46.965188  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:46.965202  595895 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-214441 && echo "addons-214441" | sudo tee /etc/hostname
	I0929 11:30:47.092132  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-214441
	
	I0929 11:30:47.092159  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.095605  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.096136  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.096169  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.096340  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.096555  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.096747  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.096902  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.097123  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:47.097351  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:47.097369  595895 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-214441' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-214441/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-214441' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 11:30:47.216048  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: 
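
From this point provisioning consists of running shell commands on the guest over SSH: set the hostname, patch /etc/hosts, and later install certificates and configure the container runtime. A minimal native-Go version of that "run a remote command and capture its output" step using golang.org/x/crypto/ssh; the address and user come from the log, while the helper name and key path are illustrative.

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runRemote executes one shell command on the guest and returns its combined
    // output, roughly what the provisioner's SSH runner does for each step.
    func runRemote(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runRemote("192.168.39.76:22", "docker",
    		"/path/to/machines/addons-214441/id_rsa",
    		`sudo hostname addons-214441 && echo "addons-214441" | sudo tee /etc/hostname`)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Print(out)
    }
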
	I0929 11:30:47.216081  595895 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21654-591397/.minikube CaCertPath:/home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21654-591397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21654-591397/.minikube}
	I0929 11:30:47.216160  595895 buildroot.go:174] setting up certificates
	I0929 11:30:47.216176  595895 provision.go:84] configureAuth start
	I0929 11:30:47.216187  595895 main.go:141] libmachine: (addons-214441) Calling .GetMachineName
	I0929 11:30:47.216551  595895 main.go:141] libmachine: (addons-214441) Calling .GetIP
	I0929 11:30:47.219822  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.220206  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.220241  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.220424  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.222925  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.223320  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.223351  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.223603  595895 provision.go:143] copyHostCerts
	I0929 11:30:47.223674  595895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21654-591397/.minikube/cert.pem (1123 bytes)
	I0929 11:30:47.223815  595895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21654-591397/.minikube/key.pem (1675 bytes)
	I0929 11:30:47.223908  595895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21654-591397/.minikube/ca.pem (1082 bytes)
	I0929 11:30:47.223987  595895 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21654-591397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca-key.pem org=jenkins.addons-214441 san=[127.0.0.1 192.168.39.76 addons-214441 localhost minikube]
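provision.go:117 issues a server certificate whose SANs cover 127.0.0.1, the VM IP, and the machine names listed above. A self-contained sketch of producing such a SAN certificate with Go's crypto/x509; the in-memory CA here is purely illustrative, whereas the real flow signs with the CA kept under .minikube/certs:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative in-memory CA; errors are ignored to keep the sketch short.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs named in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-214441"}},
		DNSNames:     []string{"addons-214441", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.76")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, srvKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}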
	I0929 11:30:47.541100  595895 provision.go:177] copyRemoteCerts
	I0929 11:30:47.541199  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 11:30:47.541238  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.544486  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.544940  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.545024  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.545286  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.545574  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.545766  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.545940  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:30:47.632441  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 11:30:47.665928  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 11:30:47.699464  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 11:30:47.731874  595895 provision.go:87] duration metric: took 515.680125ms to configureAuth
	I0929 11:30:47.731904  595895 buildroot.go:189] setting minikube options for container-runtime
	I0929 11:30:47.732120  595895 config.go:182] Loaded profile config "addons-214441": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:30:47.732187  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:47.732484  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.735606  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.736098  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.736147  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.736408  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.736676  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.736876  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.737026  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.737286  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:47.737503  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:47.737522  595895 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 11:30:47.845243  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0929 11:30:47.845278  595895 buildroot.go:70] root file system type: tmpfs
	I0929 11:30:47.845464  595895 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 11:30:47.845493  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.848685  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.849080  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.849125  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.849333  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.849561  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.849749  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.849921  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.850156  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:47.850438  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:47.850513  595895 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 11:30:47.980841  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 11:30:47.980885  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:47.984021  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.984467  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:47.984505  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:47.984746  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:47.984964  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.985145  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:47.985345  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:47.985533  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:47.985753  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:47.985769  595895 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 11:30:48.944806  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
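The docker.service unit is rendered to a string, written to docker.service.new via sudo tee, and only moved into place (followed by daemon-reload, enable, restart) when diff reports a difference, which is why an unchanged unit never restarts Docker. A reduced sketch of that render step using text/template; the template below keeps only a few of the fields from the full unit shown above:

package main

import (
	"fmt"
	"os"
	"text/template"
)

// Trimmed-down docker.service template; the real unit above carries many
// more ExecStart flags and resource limits.
const unitTmpl = `[Unit]
Description=Docker Application Container Engine
Requires=docker.socket

[Service]
Type=notify
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 --tlsverify --tlscacert {{.CACert}} --tlscert {{.Cert}} --tlskey {{.Key}}

[Install]
WantedBy=multi-user.target
`

type certPaths struct{ CACert, Cert, Key string }

func main() {
	t := template.Must(template.New("docker.service").Parse(unitTmpl))
	// A caller would render this into docker.service.new, diff it against
	// the installed unit, and only mv + daemon-reload when they differ,
	// as the SSH command in the log does.
	if err := t.Execute(os.Stdout, certPaths{
		CACert: "/etc/docker/ca.pem",
		Cert:   "/etc/docker/server.pem",
		Key:    "/etc/docker/server-key.pem",
	}); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}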
	
	I0929 11:30:48.944837  595895 main.go:141] libmachine: Checking connection to Docker...
	I0929 11:30:48.944847  595895 main.go:141] libmachine: (addons-214441) Calling .GetURL
	I0929 11:30:48.946423  595895 main.go:141] libmachine: (addons-214441) DBG | using libvirt version 8000000
	I0929 11:30:48.949334  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:48.949705  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:48.949727  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:48.949905  595895 main.go:141] libmachine: Docker is up and running!
	I0929 11:30:48.949918  595895 main.go:141] libmachine: Reticulating splines...
	I0929 11:30:48.949926  595895 client.go:171] duration metric: took 22.382562322s to LocalClient.Create
	I0929 11:30:48.949961  595895 start.go:167] duration metric: took 22.382646372s to libmachine.API.Create "addons-214441"
	I0929 11:30:48.949977  595895 start.go:293] postStartSetup for "addons-214441" (driver="kvm2")
	I0929 11:30:48.949995  595895 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 11:30:48.950016  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:48.950285  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 11:30:48.950309  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:48.952588  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:48.952941  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:48.952973  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:48.953140  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:48.953358  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:48.953522  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:48.953678  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:30:49.038834  595895 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 11:30:49.044530  595895 info.go:137] Remote host: Buildroot 2025.02
	I0929 11:30:49.044562  595895 filesync.go:126] Scanning /home/jenkins/minikube-integration/21654-591397/.minikube/addons for local assets ...
	I0929 11:30:49.044653  595895 filesync.go:126] Scanning /home/jenkins/minikube-integration/21654-591397/.minikube/files for local assets ...
	I0929 11:30:49.044700  595895 start.go:296] duration metric: took 94.715435ms for postStartSetup
	I0929 11:30:49.044748  595895 main.go:141] libmachine: (addons-214441) Calling .GetConfigRaw
	I0929 11:30:49.045427  595895 main.go:141] libmachine: (addons-214441) Calling .GetIP
	I0929 11:30:49.048440  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.048801  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.048825  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.049194  595895 profile.go:143] Saving config to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/config.json ...
	I0929 11:30:49.049405  595895 start.go:128] duration metric: took 22.499712752s to createHost
	I0929 11:30:49.049432  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:49.052122  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.052625  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.052654  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.052915  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:49.053180  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:49.053373  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:49.053538  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:49.053724  595895 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:49.053929  595895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0929 11:30:49.053940  595895 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0929 11:30:49.163416  595895 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759145449.126116077
	
	I0929 11:30:49.163441  595895 fix.go:216] guest clock: 1759145449.126116077
	I0929 11:30:49.163449  595895 fix.go:229] Guest: 2025-09-29 11:30:49.126116077 +0000 UTC Remote: 2025-09-29 11:30:49.049418276 +0000 UTC m=+22.624163516 (delta=76.697801ms)
	I0929 11:30:49.163493  595895 fix.go:200] guest clock delta is within tolerance: 76.697801ms
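fix.go reads the guest clock with date +%s.%N and compares it against the host; here the delta is about 76.7ms, inside tolerance, so no resync is needed. A small sketch of that comparison, assuming a 2s tolerance purely for illustration:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// withinTolerance parses the guest's `date +%s.%N` output and reports
// whether its offset from the host clock stays under the given tolerance.
func withinTolerance(guestEpoch string, host time.Time, tol time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestEpoch, 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	delta, ok := withinTolerance("1759145449.126116077", time.Now(), 2*time.Second)
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
}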
	I0929 11:30:49.163499  595895 start.go:83] releasing machines lock for "addons-214441", held for 22.613874794s
	I0929 11:30:49.163528  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:49.163838  595895 main.go:141] libmachine: (addons-214441) Calling .GetIP
	I0929 11:30:49.166822  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.167209  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.167249  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.167420  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:49.168022  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:49.168252  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:30:49.168368  595895 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 11:30:49.168430  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:49.168489  595895 ssh_runner.go:195] Run: cat /version.json
	I0929 11:30:49.168513  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:30:49.172018  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.172253  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.172513  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.172540  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.172628  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:49.172666  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:49.172701  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:49.172958  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:30:49.173000  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:49.173136  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:30:49.173213  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:49.173301  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:30:49.173395  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:30:49.173457  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:30:49.251709  595895 ssh_runner.go:195] Run: systemctl --version
	I0929 11:30:49.275600  595895 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0929 11:30:49.282636  595895 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0929 11:30:49.282710  595895 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 11:30:49.304880  595895 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0929 11:30:49.304913  595895 start.go:495] detecting cgroup driver to use...
	I0929 11:30:49.305043  595895 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 11:30:49.330757  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 11:30:49.345061  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 11:30:49.359226  595895 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0929 11:30:49.359329  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0929 11:30:49.373874  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 11:30:49.388075  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 11:30:49.401811  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 11:30:49.415626  595895 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 11:30:49.431189  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 11:30:49.445445  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 11:30:49.459477  595895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
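The run of sed -i edits above points containerd at the cgroupfs cgroup driver, pins the pause sandbox image, and normalizes the runc runtime entries in /etc/containerd/config.toml. A sketch of the same rewrites done with Go regexps instead of sed (the patterns mirror two of the sed expressions, simplified):

package main

import (
	"fmt"
	"regexp"
)

var (
	// Same intent as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	systemdCgroup = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	// Same intent as: sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|'
	sandboxImage = regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`)
)

func rewrite(conf string) string {
	conf = systemdCgroup.ReplaceAllString(conf, "${1}SystemdCgroup = false")
	conf = sandboxImage.ReplaceAllString(conf, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`)
	return conf
}

func main() {
	in := "  sandbox_image = \"registry.k8s.io/pause:3.9\"\n  SystemdCgroup = true\n"
	fmt.Print(rewrite(in))
}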
	I0929 11:30:49.473176  595895 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 11:30:49.485689  595895 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0929 11:30:49.485783  595895 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0929 11:30:49.499975  595895 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 11:30:49.513013  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:49.660311  595895 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 11:30:49.703655  595895 start.go:495] detecting cgroup driver to use...
	I0929 11:30:49.703755  595895 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 11:30:49.722813  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 11:30:49.750032  595895 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 11:30:49.777529  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 11:30:49.795732  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 11:30:49.813375  595895 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0929 11:30:49.851205  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 11:30:49.869489  595895 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 11:30:49.896122  595895 ssh_runner.go:195] Run: which cri-dockerd
	I0929 11:30:49.900877  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 11:30:49.914013  595895 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 11:30:49.937663  595895 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 11:30:50.087078  595895 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 11:30:50.258242  595895 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0929 11:30:50.258407  595895 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
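docker.go then writes a small /etc/docker/daemon.json (130 bytes here) so Docker's cgroup driver matches the kubelet. A hedged sketch of generating such a file with the standard exec-opts key; the exact fields minikube writes may differ:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Minimal daemon.json forcing the cgroupfs cgroup driver; the real file
	// may carry additional keys (log options, storage driver, ...).
	cfg := map[string]interface{}{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}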
	I0929 11:30:50.281600  595895 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 11:30:50.297843  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:50.442188  595895 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 11:30:51.468324  595895 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.026092315s)
	I0929 11:30:51.468405  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 11:30:51.485284  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 11:30:51.502338  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 11:30:51.520247  595895 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 11:30:51.674618  595895 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 11:30:51.823542  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:51.969743  595895 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 11:30:52.010885  595895 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 11:30:52.027992  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:52.187556  595895 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 11:30:52.300820  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 11:30:52.324658  595895 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 11:30:52.324786  595895 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 11:30:52.331994  595895 start.go:563] Will wait 60s for crictl version
	I0929 11:30:52.332070  595895 ssh_runner.go:195] Run: which crictl
	I0929 11:30:52.336923  595895 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 11:30:52.378177  595895 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 11:30:52.378280  595895 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 11:30:52.410851  595895 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 11:30:52.543475  595895 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 11:30:52.543553  595895 main.go:141] libmachine: (addons-214441) Calling .GetIP
	I0929 11:30:52.546859  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:52.547288  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:30:52.547313  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:30:52.547612  595895 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0929 11:30:52.553031  595895 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 11:30:52.570843  595895 kubeadm.go:875] updating cluster {Name:addons-214441 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 11:30:52.570982  595895 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 11:30:52.571045  595895 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 11:30:52.589813  595895 docker.go:691] Got preloaded images: 
	I0929 11:30:52.589850  595895 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.0 wasn't preloaded
	I0929 11:30:52.589920  595895 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0929 11:30:52.603859  595895 ssh_runner.go:195] Run: which lz4
	I0929 11:30:52.608929  595895 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0929 11:30:52.614449  595895 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0929 11:30:52.614480  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (353447550 bytes)
	I0929 11:30:54.030641  595895 docker.go:655] duration metric: took 1.421784291s to copy over tarball
	I0929 11:30:54.030729  595895 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0929 11:30:55.448691  595895 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.417923545s)
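The preload tarball is lz4-compressed, so extraction shells out to tar with -I lz4 and keeps security.capability xattrs intact for the image layers. A sketch of invoking the same command from Go, using the paths from the log:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Println("preloaded images extracted under /var")
}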
	I0929 11:30:55.448737  595895 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0929 11:30:55.496341  595895 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0929 11:30:55.514175  595895 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2632 bytes)
	I0929 11:30:55.539628  595895 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 11:30:55.556201  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:55.705196  595895 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 11:30:57.773379  595895 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.068131004s)
	I0929 11:30:57.773509  595895 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 11:30:57.795878  595895 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 11:30:57.795910  595895 cache_images.go:85] Images are preloaded, skipping loading
	I0929 11:30:57.795931  595895 kubeadm.go:926] updating node { 192.168.39.76 8443 v1.34.0 docker true true} ...
	I0929 11:30:57.796049  595895 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-214441 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 11:30:57.796127  595895 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 11:30:57.852690  595895 cni.go:84] Creating CNI manager for ""
	I0929 11:30:57.852756  595895 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 11:30:57.852774  595895 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 11:30:57.852803  595895 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.76 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-214441 NodeName:addons-214441 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.76"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.76 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 11:30:57.852981  595895 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.76
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-214441"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.76"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.76"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 11:30:57.853053  595895 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 11:30:57.866164  595895 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 11:30:57.866236  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 11:30:57.879054  595895 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0929 11:30:57.901136  595895 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 11:30:57.922808  595895 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
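The kubeadm.yaml shown earlier (2217 bytes) is rendered from the kubeadm options struct and staged under /var/tmp/minikube before init runs. A reduced text/template sketch rendering just the nodeRegistration block with the values from this run (the field set is trimmed for illustration):

package main

import (
	"os"
	"text/template"
)

const nodeRegTmpl = `nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
  taints: []
`

func main() {
	t := template.Must(template.New("nodeRegistration").Parse(nodeRegTmpl))
	// Values taken from the rendered config shown earlier in the log.
	_ = t.Execute(os.Stdout, struct {
		CRISocket, NodeName, NodeIP string
	}{
		CRISocket: "unix:///var/run/cri-dockerd.sock",
		NodeName:  "addons-214441",
		NodeIP:    "192.168.39.76",
	})
}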
	I0929 11:30:57.944391  595895 ssh_runner.go:195] Run: grep 192.168.39.76	control-plane.minikube.internal$ /etc/hosts
	I0929 11:30:57.949077  595895 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.76	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 11:30:57.965713  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:58.115608  595895 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:30:58.151915  595895 certs.go:68] Setting up /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441 for IP: 192.168.39.76
	I0929 11:30:58.151940  595895 certs.go:194] generating shared ca certs ...
	I0929 11:30:58.151960  595895 certs.go:226] acquiring lock for ca certs: {Name:mk707c73ecd79d5343eca8617a792346e0c7ccb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.152119  595895 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21654-591397/.minikube/ca.key
	I0929 11:30:58.470474  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/ca.crt ...
	I0929 11:30:58.470507  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/ca.crt: {Name:mk182656d7edea57f023d2e0db199cb4225a8b4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.470704  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/ca.key ...
	I0929 11:30:58.470715  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/ca.key: {Name:mkd9949b3876b9f68542fba6d581787f4502134f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.470791  595895 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.key
	I0929 11:30:58.721631  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.crt ...
	I0929 11:30:58.721664  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.crt: {Name:mk28d9b982dd4335b19ce60c764e1cd1a4d53764 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.721838  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.key ...
	I0929 11:30:58.721850  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.key: {Name:mk92f9d60795b7f581dcb4003e857f2fb68fb997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:58.721920  595895 certs.go:256] generating profile certs ...
	I0929 11:30:58.721989  595895 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.key
	I0929 11:30:58.722004  595895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt with IP's: []
	I0929 11:30:59.043304  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt ...
	I0929 11:30:59.043336  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: {Name:mkd724da95490eed1b0581ef6c65a2b1785468b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.043499  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.key ...
	I0929 11:30:59.043510  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.key: {Name:mkba543125a928af6b44a2eb304c49514c816581 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.043578  595895 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key.adbef5ab
	I0929 11:30:59.043598  595895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt.adbef5ab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.76]
	I0929 11:30:59.456164  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt.adbef5ab ...
	I0929 11:30:59.456200  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt.adbef5ab: {Name:mk5a23687be38fbd7ef5257880d1d7f5b199f933 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.456424  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key.adbef5ab ...
	I0929 11:30:59.456443  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key.adbef5ab: {Name:mke7b9b847497d2728644e9b30a8393a50e57e5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.456526  595895 certs.go:381] copying /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt.adbef5ab -> /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt
	I0929 11:30:59.456638  595895 certs.go:385] copying /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key.adbef5ab -> /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key
	I0929 11:30:59.456705  595895 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.key
	I0929 11:30:59.456726  595895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.crt with IP's: []
	I0929 11:30:59.785388  595895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.crt ...
	I0929 11:30:59.785424  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.crt: {Name:mkb2afc6ab3119c9842fe1ce2f48d7c6196dbfb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.785611  595895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.key ...
	I0929 11:30:59.785642  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.key: {Name:mk6b37b3ae22881d553c47031d96c6f22bdfded2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:59.785833  595895 certs.go:484] found cert: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 11:30:59.785879  595895 certs.go:484] found cert: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/ca.pem (1082 bytes)
	I0929 11:30:59.785905  595895 certs.go:484] found cert: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/cert.pem (1123 bytes)
	I0929 11:30:59.785932  595895 certs.go:484] found cert: /home/jenkins/minikube-integration/21654-591397/.minikube/certs/key.pem (1675 bytes)
	I0929 11:30:59.786662  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 11:30:59.821270  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 11:30:59.853588  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 11:30:59.885559  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 11:30:59.916538  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 11:30:59.948991  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 11:30:59.981478  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 11:31:00.014753  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0929 11:31:00.046891  595895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21654-591397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 11:31:00.079370  595895 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 11:31:00.101600  595895 ssh_runner.go:195] Run: openssl version
	I0929 11:31:00.108829  595895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 11:31:00.123448  595895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:31:00.129416  595895 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 11:30 /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:31:00.129502  595895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:31:00.137583  595895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
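The two steps above make the minikube CA trusted system-wide: the PEM is linked into /usr/share/ca-certificates, then a <subject-hash>.0 symlink (b5213941.0 here) is created in /etc/ssl/certs, the layout OpenSSL's hash-based lookup expects. A sketch of deriving the hash by shelling out to openssl and creating the link (paths as in the log):

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"

	// `openssl x509 -hash -noout` prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))

	// /etc/ssl/certs/<hash>.0 -> minikubeCA.pem, as the ln -fs in the log does.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		log.Fatal(err)
	}
	log.Printf("linked %s -> %s", link, pem)
}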
	I0929 11:31:00.152396  595895 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 11:31:00.157895  595895 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 11:31:00.157960  595895 kubeadm.go:392] StartCluster: {Name:addons-214441 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-214441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:31:00.158083  595895 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 11:31:00.176917  595895 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 11:31:00.190119  595895 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 11:31:00.203558  595895 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 11:31:00.216736  595895 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 11:31:00.216758  595895 kubeadm.go:157] found existing configuration files:
	
	I0929 11:31:00.216805  595895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 11:31:00.229008  595895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 11:31:00.229138  595895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 11:31:00.242441  595895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 11:31:00.254460  595895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 11:31:00.254523  595895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 11:31:00.268124  595895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 11:31:00.284523  595895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 11:31:00.284596  595895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 11:31:00.297510  595895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 11:31:00.311858  595895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 11:31:00.311927  595895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
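
The four grep/rm pairs above implement a simple stale-kubeconfig cleanup: each /etc/kubernetes/*.conf is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm can regenerate it. Below is a minimal Go sketch of that pattern; it is illustrative only, not minikube's actual code, with paths copied from the log and error handling simplified.

package main

import (
	"bytes"
	"fmt"
	"os"
)

// Sketch of the stale-kubeconfig cleanup shown above: keep each file only if
// it already points at the expected control-plane endpoint, otherwise remove
// it so `kubeadm init` regenerates it. Not minikube's actual implementation.
func main() {
	const wantServer = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && bytes.Contains(data, []byte(wantServer)) {
			continue // already targets the right endpoint; keep it
		}
		// Missing or pointing elsewhere: delete, ignoring "not found",
		// which matches the effect of the `sudo rm -f` calls in the log.
		if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
			fmt.Fprintln(os.Stderr, rmErr)
		}
	}
}
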
	I0929 11:31:00.329319  595895 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0929 11:31:00.392668  595895 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 11:31:00.392776  595895 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 11:31:00.500945  595895 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 11:31:00.501073  595895 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 11:31:00.501248  595895 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 11:31:00.518470  595895 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 11:31:00.521672  595895 out.go:252]   - Generating certificates and keys ...
	I0929 11:31:00.521778  595895 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 11:31:00.521835  595895 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 11:31:00.844406  595895 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 11:31:01.356940  595895 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 11:31:01.469316  595895 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 11:31:01.609628  595895 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 11:31:01.854048  595895 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 11:31:01.854239  595895 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-214441 localhost] and IPs [192.168.39.76 127.0.0.1 ::1]
	I0929 11:31:02.222219  595895 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 11:31:02.222361  595895 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-214441 localhost] and IPs [192.168.39.76 127.0.0.1 ::1]
	I0929 11:31:02.331774  595895 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 11:31:02.452417  595895 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 11:31:03.277600  595895 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 11:31:03.277709  595895 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 11:31:03.337296  595895 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 11:31:03.576740  595895 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 11:31:03.754957  595895 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 11:31:04.028596  595895 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 11:31:04.458901  595895 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 11:31:04.459731  595895 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 11:31:04.461956  595895 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 11:31:04.463895  595895 out.go:252]   - Booting up control plane ...
	I0929 11:31:04.464031  595895 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 11:31:04.464116  595895 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 11:31:04.464220  595895 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 11:31:04.482430  595895 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 11:31:04.482595  595895 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 11:31:04.490659  595895 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 11:31:04.490827  595895 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 11:31:04.490920  595895 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 11:31:04.666361  595895 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 11:31:04.666495  595895 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 11:31:05.175870  595895 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 510.006022ms
	I0929 11:31:05.187944  595895 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 11:31:05.188057  595895 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.76:8443/livez
	I0929 11:31:05.188256  595895 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 11:31:05.188362  595895 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 11:31:07.767053  595895 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.579446651s
	I0929 11:31:09.215755  595895 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.029766048s
	I0929 11:31:11.189186  595895 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.002998119s
	I0929 11:31:11.214239  595895 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 11:31:11.232892  595895 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 11:31:11.255389  595895 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 11:31:11.255580  595895 kubeadm.go:310] [mark-control-plane] Marking the node addons-214441 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 11:31:11.270844  595895 kubeadm.go:310] [bootstrap-token] Using token: 7wgemt.sdnt4jx2dgy9ll51
	I0929 11:31:11.272442  595895 out.go:252]   - Configuring RBAC rules ...
	I0929 11:31:11.272557  595895 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 11:31:11.279364  595895 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 11:31:11.294463  595895 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 11:31:11.298793  595895 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 11:31:11.306582  595895 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 11:31:11.323727  595895 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 11:31:11.601710  595895 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 11:31:12.069553  595895 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 11:31:12.597044  595895 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 11:31:12.597931  595895 kubeadm.go:310] 
	I0929 11:31:12.598017  595895 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 11:31:12.598026  595895 kubeadm.go:310] 
	I0929 11:31:12.598142  595895 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 11:31:12.598153  595895 kubeadm.go:310] 
	I0929 11:31:12.598181  595895 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 11:31:12.598281  595895 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 11:31:12.598374  595895 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 11:31:12.598390  595895 kubeadm.go:310] 
	I0929 11:31:12.598436  595895 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 11:31:12.598442  595895 kubeadm.go:310] 
	I0929 11:31:12.598481  595895 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 11:31:12.598497  595895 kubeadm.go:310] 
	I0929 11:31:12.598577  595895 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 11:31:12.598692  595895 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 11:31:12.598809  595895 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 11:31:12.598828  595895 kubeadm.go:310] 
	I0929 11:31:12.598937  595895 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 11:31:12.599041  595895 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 11:31:12.599055  595895 kubeadm.go:310] 
	I0929 11:31:12.599196  595895 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7wgemt.sdnt4jx2dgy9ll51 \
	I0929 11:31:12.599332  595895 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f2c60916dfe453d33b3c30594ac6ac02ea520a59a4bdac7119cd9e587175e1eb \
	I0929 11:31:12.599365  595895 kubeadm.go:310] 	--control-plane 
	I0929 11:31:12.599397  595895 kubeadm.go:310] 
	I0929 11:31:12.599486  595895 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 11:31:12.599496  595895 kubeadm.go:310] 
	I0929 11:31:12.599568  595895 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7wgemt.sdnt4jx2dgy9ll51 \
	I0929 11:31:12.599705  595895 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f2c60916dfe453d33b3c30594ac6ac02ea520a59a4bdac7119cd9e587175e1eb 
	I0929 11:31:12.601217  595895 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
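
The two kubeadm join commands above pin the cluster CA via --discovery-token-ca-cert-hash, which is a SHA-256 over the DER-encoded SubjectPublicKeyInfo of the CA certificate. The Go sketch below re-derives that value from the certificateDir reported earlier in this log ("/var/lib/minikube/certs"); the ca.crt file name is the usual kubeadm convention and is assumed here.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// Re-derives the --discovery-token-ca-cert-hash shown in the join commands:
// SHA-256 of the CA certificate's DER-encoded SubjectPublicKeyInfo.
// The directory comes from the certificateDir line earlier in the log; the
// ca.crt file name is the standard kubeadm convention (assumed here).
func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found in ca.crt")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}

If the path assumption holds, running this on the node should print the same sha256:f2c6… value that appears in the join commands above.
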
	I0929 11:31:12.601272  595895 cni.go:84] Creating CNI manager for ""
	I0929 11:31:12.601305  595895 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 11:31:12.603223  595895 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0929 11:31:12.604766  595895 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0929 11:31:12.618554  595895 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
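
The scp line above ships a 496-byte bridge CNI conflist to /etc/cni/net.d/1-k8s.conflist, but the payload itself is not reproduced in the log. For illustration only, the Go snippet below writes a conflist of the general shape such a bridge configuration takes; the plugin list, subnet, and field values are assumptions, not the actual file contents.

package main

import "os"

// Illustrative only: the log transfers a 496-byte conflist whose contents are
// not shown, so the plugin list, subnet, and field values below are assumed
// examples of a typical bridge CNI config, not the file minikube wrote.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Written to the working directory here; on the node the file lands in
	// /etc/cni/net.d/1-k8s.conflist via scp, as the log shows.
	if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
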
	I0929 11:31:12.641768  595895 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 11:31:12.641942  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:12.641954  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-214441 minikube.k8s.io/updated_at=2025_09_29T11_31_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b8c533eb3dce8338d3d4e7231ea97d1c44ed6f81 minikube.k8s.io/name=addons-214441 minikube.k8s.io/primary=true
	I0929 11:31:12.682767  595895 ops.go:34] apiserver oom_adj: -16
	I0929 11:31:12.800130  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:13.300439  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:13.800339  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:14.300644  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:14.800381  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:15.301049  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:15.801207  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:16.301226  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:16.801024  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:17.300849  595895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:31:17.440215  595895 kubeadm.go:1105] duration metric: took 4.798376612s to wait for elevateKubeSystemPrivileges
	I0929 11:31:17.440271  595895 kubeadm.go:394] duration metric: took 17.282308974s to StartCluster
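
The repeated `kubectl get sa default` runs above are a readiness poll: the command is retried until the default ServiceAccount exists, which is what the 4.798s elevateKubeSystemPrivileges metric measures. Below is a rough Go sketch of that retry pattern; the command and paths are copied from the log, the ~500ms cadence is inferred from the timestamps, and this is illustrative rather than minikube's actual implementation.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// Rough sketch of the readiness poll visible above: re-run
// `kubectl get sa default` until it succeeds. Command and paths mirror the
// log; the ~500ms interval is inferred from the timestamps.
func waitForDefaultSA(ctx context.Context) error {
	kubectl := "/var/lib/minikube/binaries/v1.34.0/kubectl"
	args := []string{kubectl, "get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig"}

	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()

	for {
		if err := exec.CommandContext(ctx, "sudo", args...).Run(); err == nil {
			return nil // default ServiceAccount exists; the privileges step can finish
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForDefaultSA(ctx); err != nil {
		fmt.Println("gave up waiting for the default service account:", err)
	}
}
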
	I0929 11:31:17.440297  595895 settings.go:142] acquiring lock: {Name:mk832bb073af4ae47756dd4494ea087d7aa99c2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:31:17.440448  595895 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21654-591397/kubeconfig
	I0929 11:31:17.441186  595895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21654-591397/kubeconfig: {Name:mk64b4db01785e3abeedb000f7d1263b1f56db2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:31:17.441409  595895 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 11:31:17.441416  595895 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 11:31:17.441496  595895 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0929 11:31:17.441684  595895 config.go:182] Loaded profile config "addons-214441": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:31:17.441696  595895 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-214441"
	I0929 11:31:17.441708  595895 addons.go:69] Setting yakd=true in profile "addons-214441"
	I0929 11:31:17.441736  595895 addons.go:238] Setting addon yakd=true in "addons-214441"
	I0929 11:31:17.441757  595895 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-214441"
	I0929 11:31:17.441709  595895 addons.go:69] Setting ingress=true in profile "addons-214441"
	I0929 11:31:17.441784  595895 addons.go:238] Setting addon ingress=true in "addons-214441"
	I0929 11:31:17.441793  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.441803  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.441799  595895 addons.go:69] Setting default-storageclass=true in profile "addons-214441"
	I0929 11:31:17.441840  595895 addons.go:69] Setting gcp-auth=true in profile "addons-214441"
	I0929 11:31:17.441876  595895 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-214441"
	I0929 11:31:17.441886  595895 mustload.go:65] Loading cluster: addons-214441
	I0929 11:31:17.441893  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442145  595895 addons.go:69] Setting registry=true in profile "addons-214441"
	I0929 11:31:17.442160  595895 addons.go:238] Setting addon registry=true in "addons-214441"
	I0929 11:31:17.442191  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442280  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.442300  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.442353  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.442366  595895 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-214441"
	I0929 11:31:17.442371  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.442380  595895 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-214441"
	I0929 11:31:17.442381  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.442385  595895 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-214441"
	I0929 11:31:17.442396  595895 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-214441"
	I0929 11:31:17.442399  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.442425  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.442400  595895 addons.go:69] Setting cloud-spanner=true in profile "addons-214441"
	I0929 11:31:17.442448  595895 addons.go:69] Setting registry-creds=true in profile "addons-214441"
	I0929 11:31:17.442456  595895 addons.go:238] Setting addon cloud-spanner=true in "addons-214441"
	I0929 11:31:17.442469  595895 addons.go:238] Setting addon registry-creds=true in "addons-214441"
	I0929 11:31:17.442478  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.442491  595895 addons.go:69] Setting storage-provisioner=true in profile "addons-214441"
	I0929 11:31:17.442514  595895 addons.go:238] Setting addon storage-provisioner=true in "addons-214441"
	I0929 11:31:17.442543  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442544  595895 addons.go:69] Setting inspektor-gadget=true in profile "addons-214441"
	I0929 11:31:17.442557  595895 addons.go:238] Setting addon inspektor-gadget=true in "addons-214441"
	I0929 11:31:17.442563  595895 addons.go:69] Setting ingress-dns=true in profile "addons-214441"
	I0929 11:31:17.442575  595895 addons.go:238] Setting addon ingress-dns=true in "addons-214441"
	I0929 11:31:17.442588  595895 addons.go:69] Setting metrics-server=true in profile "addons-214441"
	I0929 11:31:17.442591  595895 addons.go:69] Setting volumesnapshots=true in profile "addons-214441"
	I0929 11:31:17.442599  595895 addons.go:238] Setting addon metrics-server=true in "addons-214441"
	I0929 11:31:17.442610  595895 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-214441"
	I0929 11:31:17.442602  595895 config.go:182] Loaded profile config "addons-214441": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:31:17.442620  595895 addons.go:238] Setting addon volumesnapshots=true in "addons-214441"
	I0929 11:31:17.442622  595895 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-214441"
	I0929 11:31:17.442631  595895 addons.go:69] Setting volcano=true in profile "addons-214441"
	I0929 11:31:17.442647  595895 addons.go:238] Setting addon volcano=true in "addons-214441"
	I0929 11:31:17.442826  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442847  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.442963  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443004  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443177  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443198  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443212  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443242  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443255  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.443270  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443292  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443439  595895 out.go:179] * Verifying Kubernetes components...
	I0929 11:31:17.443489  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443521  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443564  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443603  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443459  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.443699  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.443852  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.443879  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.443895  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.444137  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.444199  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.444468  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.454269  595895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:31:17.455462  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.455556  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.457160  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.457213  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.458697  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.458765  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.459732  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37039
	I0929 11:31:17.459901  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.459979  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.460127  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.460161  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.460170  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.460239  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.460291  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44679
	I0929 11:31:17.460695  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.463901  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.463928  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.464092  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.465162  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.465408  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.466171  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.466824  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.467158  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.479447  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.479512  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.482323  595895 addons.go:238] Setting addon default-storageclass=true in "addons-214441"
	I0929 11:31:17.482391  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.482773  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.482798  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.493064  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45797
	I0929 11:31:17.493710  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40233
	I0929 11:31:17.496980  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.497697  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.497723  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.498583  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.499544  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.500891  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.502188  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.503325  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.503345  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.503676  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33541
	I0929 11:31:17.503826  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.504644  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.504730  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.505209  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.506256  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.506279  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.506340  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36303
	I0929 11:31:17.506984  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46129
	I0929 11:31:17.507294  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.507677  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.507745  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37135
	I0929 11:31:17.508552  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.509057  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.509394  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.509407  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.509415  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.510041  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.510142  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.510163  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.511579  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.513259  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.513521  595895 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-214441"
	I0929 11:31:17.513538  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46151
	I0929 11:31:17.513575  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.514124  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.514166  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.511927  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.514352  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.513596  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42277
	I0929 11:31:17.520718  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.520752  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44361
	I0929 11:31:17.521039  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.521092  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33947
	I0929 11:31:17.521207  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43367
	I0929 11:31:17.520724  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37739
	I0929 11:31:17.522317  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.522444  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.522469  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.522507  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.522852  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.522920  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.523211  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.523225  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.523306  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.523461  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.523473  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.524082  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.524376  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.524523  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.524535  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.524631  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.524746  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34633
	I0929 11:31:17.529249  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.529354  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.529387  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.529766  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.529766  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.529799  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.529807  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.529908  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.530061  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.530343  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.530371  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.530465  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.530878  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.530932  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.531382  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.531639  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.531658  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.532124  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.532483  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.533015  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.533033  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.533472  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.533508  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.534270  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.535229  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.535779  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.535886  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.537511  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:17.538187  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39035
	I0929 11:31:17.539952  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.540005  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.540222  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I0929 11:31:17.540575  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43245
	I0929 11:31:17.540786  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.540854  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.540890  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.541625  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.541647  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.542032  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.542195  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.542600  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.543176  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.543185  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.543199  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.543204  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.543307  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
	I0929 11:31:17.544136  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.544545  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.544610  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.544640  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.545415  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.545449  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.546464  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.546490  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.546965  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.547387  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.548714  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.548795  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.550669  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38149
	I0929 11:31:17.551412  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.551773  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34225
	I0929 11:31:17.552171  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.552255  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.552199  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.552753  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.552854  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.553685  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.553778  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.554307  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.554514  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.555149  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.557383  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.558025  595895 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 11:31:17.559210  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 11:31:17.559231  595895 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 11:31:17.559262  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.559338  595895 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.12.2
	I0929 11:31:17.560620  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.560681  595895 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.12.2
	I0929 11:31:17.560823  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34665
	I0929 11:31:17.561393  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.562236  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.562295  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.562751  595895 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 11:31:17.563140  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.563492  595895 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.12.2
	I0929 11:31:17.564252  595895 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 11:31:17.564269  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 11:31:17.564289  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.564293  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.564684  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.564737  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.565023  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.565146  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.567800  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.568057  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.568262  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36805
	I0929 11:31:17.568522  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.568701  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.569229  595895 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0929 11:31:17.569253  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (498149 bytes)
	I0929 11:31:17.569273  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.569959  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.570047  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.572257  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.572409  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.572423  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.573470  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.573495  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.573534  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42077
	I0929 11:31:17.574161  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.574166  595895 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 11:31:17.574420  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.574975  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:17.575036  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:17.575329  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.575415  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.575430  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.575671  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.575865  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.576099  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.577061  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.577247  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.577378  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.577535  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.577554  595895 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 11:31:17.577582  595895 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 11:31:17.577605  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.579736  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40239
	I0929 11:31:17.580597  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.581383  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.581446  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.582289  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.582694  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44947
	I0929 11:31:17.582952  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.583853  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.585630  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46433
	I0929 11:31:17.585637  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38605
	I0929 11:31:17.586733  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.586755  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.586846  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.587240  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.587458  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.587548  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.587503  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I0929 11:31:17.588342  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.588817  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.588838  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.589534  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35991
	I0929 11:31:17.589680  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.589727  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.589953  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.590461  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.590684  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.590701  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.590814  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.590864  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.591866  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.592243  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.592985  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.593774  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.593791  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.594759  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.595210  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.595390  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.596824  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.597871  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.598227  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.598762  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35305
	I0929 11:31:17.599344  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 11:31:17.600871  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.600871  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.600871  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.600928  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.600961  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.600994  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39573
	I0929 11:31:17.601002  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43843
	I0929 11:31:17.601641  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 11:31:17.601827  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.601850  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.601913  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.602052  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.602151  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I0929 11:31:17.602155  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.602306  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.602590  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.602610  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.602811  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.602977  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.603038  595895 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 11:31:17.603089  595895 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 11:31:17.603260  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.603328  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.603564  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.603593  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.603752  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.604258  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.604320  595895 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 11:31:17.604825  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.604525  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.605686  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.605694  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.604846  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.604946  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 11:31:17.605125  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.606062  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.606154  595895 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 11:31:17.606169  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.606174  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.607283  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.607459  595895 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 11:31:17.607513  595895 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 11:31:17.608000  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 11:31:17.608022  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.607722  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.607825  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.608327  595895 out.go:179]   - Using image docker.io/busybox:stable
	I0929 11:31:17.608504  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.609208  595895 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 11:31:17.609380  595895 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 11:31:17.609617  595895 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 11:31:17.609695  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.609885  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
	I0929 11:31:17.610214  595895 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 11:31:17.610480  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 11:31:17.610442  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.610634  595895 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:31:17.610651  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 11:31:17.610666  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.610637  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 11:31:17.610551  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.611056  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.611127  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.611242  595895 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 11:31:17.612177  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.612200  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.612367  595895 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 11:31:17.612539  595895 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 11:31:17.612558  595895 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 11:31:17.612574  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 11:31:17.612702  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.612652  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.613066  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.613132  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.613978  595895 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 11:31:17.614058  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 11:31:17.614157  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.614015  595895 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 11:31:17.614286  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 11:31:17.614314  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.614339  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40205
	I0929 11:31:17.614532  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 11:31:17.614774  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.614918  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 11:31:17.615384  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:17.615994  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:17.616036  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:17.616065  595895 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 11:31:17.616139  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 11:31:17.616150  595895 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 11:31:17.616217  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.616451  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:17.616766  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:17.617254  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 11:31:17.618390  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.618595  595895 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 11:31:17.619658  595895 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 11:31:17.619715  595895 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 11:31:17.619728  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 11:31:17.619752  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.619788  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 11:31:17.620191  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.620909  595895 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 11:31:17.620926  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 11:31:17.621015  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.621216  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.622235  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.622260  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.622296  595895 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 11:31:17.622987  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.623010  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.623146  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.623384  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.623851  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 11:31:17.623870  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 11:31:17.623891  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.623910  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:17.623977  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.623991  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.624284  595895 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 11:31:17.624300  595895 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 11:31:17.624317  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:17.624324  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.624330  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.624655  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.624690  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.625088  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.625297  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.626099  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.626182  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.626247  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.626251  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.626597  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.626789  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.626890  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.627091  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.627238  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.627284  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.627374  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.627541  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.627907  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.627938  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.627949  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.627979  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.628066  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.628081  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.628268  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.628308  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.628533  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.628572  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.628735  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.628848  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.629214  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.629249  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.629266  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.629512  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.629592  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.629606  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.629764  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.629861  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.630008  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.630062  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.630142  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.630197  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.630311  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.630370  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.630910  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.631305  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.631821  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.632272  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.632296  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.632442  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.632503  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.632710  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.632789  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.633084  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.633162  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.633176  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.633207  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.633228  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.633242  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.633391  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.633435  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.633557  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.633619  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.633759  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.633793  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.634042  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.634042  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.634131  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:17.634164  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:17.634219  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:17.634716  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:17.634894  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:17.635088  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:17.635265  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	W0929 11:31:17.919750  595895 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44698->192.168.39.76:22: read: connection reset by peer
	I0929 11:31:17.919798  595895 retry.go:31] will retry after 127.603101ms: ssh: handshake failed: read tcp 192.168.39.1:44698->192.168.39.76:22: read: connection reset by peer
	W0929 11:31:17.927998  595895 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44716->192.168.39.76:22: read: connection reset by peer
	I0929 11:31:17.928034  595895 retry.go:31] will retry after 352.316454ms: ssh: handshake failed: read tcp 192.168.39.1:44716->192.168.39.76:22: read: connection reset by peer
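The two handshake resets above are transient: each addon installer opens its own SSH session to the node (hence the many "new ssh client" entries above), and sshutil retries with a sub-second backoff, as the retry.go lines show. For reference, a manual reachability check against this node, reusing the key path, user and address reported in the log (not something the test itself runs), would be roughly:

	ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no \
	    -i /home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa \
	    docker@192.168.39.76 true && echo "sshd on addons-214441 is accepting connections"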
	I0929 11:31:18.834850  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 11:31:18.834892  595895 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 11:31:18.867206  595895 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 11:31:18.867237  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 11:31:18.998018  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 11:31:19.019969  595895 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.57851512s)
	I0929 11:31:19.019988  595895 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.56567428s)
	I0929 11:31:19.020058  595895 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:31:19.020195  595895 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
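The bash pipeline above rewrites the coredns ConfigMap in place: the sed expressions insert a hosts block immediately before the "forward . /etc/resolv.conf" line and a log directive before "errors", so pods can resolve host.minikube.internal to the host-side gateway 192.168.39.1. After the replace, the affected part of the Corefile should read roughly as below (the surrounding plugins are the stock kubeadm defaults and are elided):

	        log
	        errors
	        ...
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf { ... }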
	I0929 11:31:19.047383  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 11:31:19.178551  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 11:31:19.194460  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 11:31:19.203493  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:31:19.224634  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0929 11:31:19.236908  595895 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:19.236937  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 11:31:19.339094  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 11:31:19.470368  595895 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 11:31:19.470407  595895 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 11:31:19.482955  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 11:31:19.507279  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 11:31:19.533452  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 11:31:19.533481  595895 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 11:31:19.580275  595895 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 11:31:19.580310  595895 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 11:31:19.612191  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 11:31:19.612228  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 11:31:19.656222  595895 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 11:31:19.656250  595895 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 11:31:19.707608  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 11:31:19.720943  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:19.949642  595895 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 11:31:19.949675  595895 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 11:31:20.010236  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 11:31:20.010269  595895 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 11:31:20.143152  595895 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 11:31:20.143179  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 11:31:20.164194  595895 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 11:31:20.164223  595895 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 11:31:20.178619  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 11:31:20.178652  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 11:31:20.352326  595895 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 11:31:20.352354  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 11:31:20.399905  595895 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 11:31:20.399935  595895 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 11:31:20.528800  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 11:31:20.554026  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 11:31:20.608085  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 11:31:20.608132  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 11:31:20.855879  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 11:31:20.901072  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 11:31:20.901124  595895 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 11:31:21.046874  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 11:31:21.046903  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 11:31:21.279957  595895 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 11:31:21.279985  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 11:31:21.494633  595895 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 11:31:21.494662  595895 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 11:31:21.896279  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 11:31:22.355612  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 11:31:22.355644  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 11:31:23.136046  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 11:31:23.136083  595895 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 11:31:23.742895  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 11:31:23.742921  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 11:31:24.397559  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 11:31:24.397588  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 11:31:24.806696  595895 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 11:31:24.806729  595895 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 11:31:25.028630  595895 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 11:31:25.028675  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:25.032868  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:25.033494  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:25.033526  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:25.033760  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:25.034027  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:25.034259  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:25.034422  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:25.610330  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 11:31:25.954809  595895 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 11:31:26.260607  595895 addons.go:238] Setting addon gcp-auth=true in "addons-214441"
	I0929 11:31:26.260695  595895 host.go:66] Checking if "addons-214441" exists ...
	I0929 11:31:26.261024  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:26.261068  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:26.276135  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36421
	I0929 11:31:26.276726  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:26.277323  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:26.277354  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:26.277924  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:26.278456  595895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:31:26.278490  595895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:31:26.293277  595895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40243
	I0929 11:31:26.293786  595895 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:31:26.294319  595895 main.go:141] libmachine: Using API Version  1
	I0929 11:31:26.294344  595895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:31:26.294858  595895 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:31:26.295136  595895 main.go:141] libmachine: (addons-214441) Calling .GetState
	I0929 11:31:26.297279  595895 main.go:141] libmachine: (addons-214441) Calling .DriverName
	I0929 11:31:26.297583  595895 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 11:31:26.297612  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHHostname
	I0929 11:31:26.301409  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:26.302065  595895 main.go:141] libmachine: (addons-214441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:9c:d8", ip: ""} in network mk-addons-214441: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:43 +0000 UTC Type:0 Mac:52:54:00:98:9c:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:addons-214441 Clientid:01:52:54:00:98:9c:d8}
	I0929 11:31:26.302093  595895 main.go:141] libmachine: (addons-214441) DBG | domain addons-214441 has defined IP address 192.168.39.76 and MAC address 52:54:00:98:9c:d8 in network mk-addons-214441
	I0929 11:31:26.302272  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHPort
	I0929 11:31:26.302474  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHKeyPath
	I0929 11:31:26.302636  595895 main.go:141] libmachine: (addons-214441) Calling .GetSSHUsername
	I0929 11:31:26.302830  595895 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/addons-214441/id_rsa Username:docker}
	I0929 11:31:26.648618  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.65053686s)
	I0929 11:31:26.648643  595895 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.628556534s)
	I0929 11:31:26.648693  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:26.648703  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:26.648707  595895 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.628486823s)
	I0929 11:31:26.648740  595895 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0929 11:31:26.648855  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.601423652s)
	I0929 11:31:26.648889  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:26.648898  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:26.649041  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:26.649056  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:26.649066  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:26.649073  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:26.649181  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:26.649214  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:26.649225  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:26.649256  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:26.649265  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:26.649555  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:26.649585  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:26.649698  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:26.649728  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:26.649741  595895 node_ready.go:35] waiting up to 6m0s for node "addons-214441" to be "Ready" ...
	I0929 11:31:26.649625  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:26.649665  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:26.797678  595895 node_ready.go:49] node "addons-214441" is "Ready"
	I0929 11:31:26.797712  595895 node_ready.go:38] duration metric: took 147.94134ms for node "addons-214441" to be "Ready" ...
	I0929 11:31:26.797735  595895 api_server.go:52] waiting for apiserver process to appear ...
	I0929 11:31:26.797797  595895 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:31:27.078868  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:27.078896  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:27.079284  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:27.079351  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:27.079372  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:27.220384  595895 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-214441" context rescaled to 1 replicas
	I0929 11:31:30.522194  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.34358993s)
	I0929 11:31:30.522263  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.327765304s)
	I0929 11:31:30.522284  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522297  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522297  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522308  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522336  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.318803941s)
	I0929 11:31:30.522386  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522398  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522641  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.522658  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.522685  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522695  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522794  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.522804  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.522813  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522819  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522874  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:30.522863  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.522905  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.522914  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:30.522922  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:30.522952  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:30.522984  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.522990  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.523183  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:30.523188  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.523205  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.523212  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:30.523216  595895 addons.go:479] Verifying addon ingress=true in "addons-214441"
	I0929 11:31:30.523222  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:30.527182  595895 out.go:179] * Verifying ingress addon...
	I0929 11:31:30.529738  595895 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 11:31:30.708830  595895 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 11:31:30.708859  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:31.235125  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:31.629964  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:32.068126  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:32.586294  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:33.055440  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:33.661344  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
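While the remaining applies complete, kapi.go polls the ingress-nginx controller pods repeatedly until one reports Ready (the "current state: Pending" lines above). The equivalent manual check against this cluster, using the context, namespace and label selector from the log (shown for orientation only, not part of the test flow), would be:

	kubectl --context addons-214441 -n ingress-nginx get pods \
	  -l app.kubernetes.io/name=ingress-nginx -o wide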
	I0929 11:31:33.865322  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (14.640641229s)
	I0929 11:31:33.865361  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (14.526214451s)
	I0929 11:31:33.865396  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865407  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865413  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (14.382417731s)
	I0929 11:31:33.865425  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865448  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (14.358144157s)
	I0929 11:31:33.865456  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865470  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865472  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865527  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (14.157883934s)
	I0929 11:31:33.865528  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865545  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865554  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865410  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865659  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (14.144676501s)
	W0929 11:31:33.865707  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:33.865740  595895 retry.go:31] will retry after 127.952259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:33.865790  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (13.336965067s)
	I0929 11:31:33.865796  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.865807  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.865810  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865818  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865821  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865826  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865864  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.865883  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.865895  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.865895  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.865906  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865922  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.865928  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865931  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.865939  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865945  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.865960  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (13.311901558s)
	I0929 11:31:33.865978  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.865986  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866077  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (13.010152282s)
	I0929 11:31:33.866096  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.866124  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866162  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866187  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866214  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866223  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866230  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.866237  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866283  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (11.969964695s)
	W0929 11:31:33.866347  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 11:31:33.866370  595895 retry.go:31] will retry after 213.926415ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
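Editor's note: the apply above fails because the VolumeSnapshotClass object is submitted in the same kubectl invocation that creates its CRDs, so no REST mapping for snapshot.storage.k8s.io/v1 exists yet, hence "ensure CRDs are installed first" and the retry. A minimal, self-contained sketch of the usual remedy (an illustration using client-go and the apiextensions clientset, not minikube's actual retry code) is to wait for the CRD's Established condition before applying dependent objects:

package main

import (
	"context"
	"fmt"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForCRDEstablished polls until the named CRD reports Established=True,
// the point at which custom resources of that kind can be applied safely.
func waitForCRDEstablished(ctx context.Context, cs apiextclient.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // CRD may not exist yet; keep polling
			}
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Kubeconfig path taken from the log lines above; adjust for other environments.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForCRDEstablished(context.Background(), cs, "volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
		panic(err)
	}
	fmt.Println("CRD established; applying VolumeSnapshotClass objects is now safe")
}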
	I0929 11:31:33.866587  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866618  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866622  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866627  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.866630  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866636  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866640  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866651  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866662  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866606  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866736  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866752  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.866766  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.866780  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.866875  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.866910  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.866925  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.867202  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.867264  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.867284  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.867303  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.867339  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.867618  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.867761  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.867769  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.867778  595895 addons.go:479] Verifying addon registry=true in "addons-214441"
	I0929 11:31:33.868269  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.868300  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.868305  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.868451  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.868463  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.868472  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:33.868479  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:33.869037  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.869070  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.869076  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.869084  595895 addons.go:479] Verifying addon metrics-server=true in "addons-214441"
	I0929 11:31:33.869798  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:33.869839  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.869847  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.869975  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:33.870031  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:33.871564  595895 out.go:179] * Verifying registry addon...
	I0929 11:31:33.872479  595895 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-214441 service yakd-dashboard -n yakd-dashboard
	
	I0929 11:31:33.874294  595895 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 11:31:33.993863  595895 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 11:31:33.993900  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
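Editor's note: the kapi.go lines here poll a label selector until the matching pods leave Pending. A self-contained sketch of that wait loop with client-go (an approximation for readers, not the kapi.go source) looks like this:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning blocks until every pod matching the selector reports phase Running.
func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient errors and empty lists: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodsRunning(context.Background(), cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
		panic(err)
	}
	fmt.Println("registry pods are Running")
}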
	I0929 11:31:33.994009  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:34.081538  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 11:31:34.115447  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:34.146570  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:34.146609  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:34.146947  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:34.146967  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:34.413578  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.803181451s)
	I0929 11:31:34.413616  595895 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (8.116003731s)
	I0929 11:31:34.413656  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:34.413669  595895 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (7.615843233s)
	I0929 11:31:34.413709  595895 api_server.go:72] duration metric: took 16.972266985s to wait for apiserver process to appear ...
	I0929 11:31:34.413722  595895 api_server.go:88] waiting for apiserver healthz status ...
	I0929 11:31:34.413750  595895 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0929 11:31:34.413675  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:34.414213  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:34.414230  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:34.414254  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:34.414261  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:34.414511  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:34.414529  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:34.414543  595895 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-214441"
	I0929 11:31:34.415286  595895 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 11:31:34.416180  595895 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 11:31:34.417833  595895 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 11:31:34.418933  595895 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 11:31:34.419343  595895 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 11:31:34.419365  595895 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 11:31:34.428017  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:34.435805  595895 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
	I0929 11:31:34.443092  595895 api_server.go:141] control plane version: v1.34.0
	I0929 11:31:34.443139  595895 api_server.go:131] duration metric: took 29.409177ms to wait for apiserver health ...
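Editor's note: the api_server.go check above is a plain HTTPS GET against /healthz that succeeds once the body reads "ok". A minimal sketch of such a probe follows; the kubeconfig-free setup and the insecure TLS option are assumptions made to keep the example short (a production check should trust the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Shortcut for a throwaway probe against a self-signed apiserver certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.76:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
}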
	I0929 11:31:34.443150  595895 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 11:31:34.495447  595895 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 11:31:34.495473  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:34.527406  595895 system_pods.go:59] 20 kube-system pods found
	I0929 11:31:34.527452  595895 system_pods.go:61] "amd-gpu-device-plugin-7jx7f" [97d8a167-36be-44b5-b99c-2b55db99df3e] Running
	I0929 11:31:34.527458  595895 system_pods.go:61] "coredns-66bc5c9577-fkh52" [bd9cc142-39db-4ed9-9ecc-3d270499136c] Running
	I0929 11:31:34.527463  595895 system_pods.go:61] "coredns-66bc5c9577-sgnb2" [343874b8-6c4d-4e36-8ebf-a379e3a93a98] Running
	I0929 11:31:34.527471  595895 system_pods.go:61] "csi-hostpath-attacher-0" [27d9af25-8d41-4c37-9359-c7bd4f88f09f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 11:31:34.527475  595895 system_pods.go:61] "csi-hostpath-resizer-0" [4f24e046-d5b2-41ff-a051-b0a572bf9348] Pending
	I0929 11:31:34.527484  595895 system_pods.go:61] "csi-hostpathplugin-8279f" [3cc6db25-521c-4711-9d36-e22ab6d16249] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 11:31:34.527490  595895 system_pods.go:61] "etcd-addons-214441" [1710b264-ccec-4054-8a1c-7d8e5ce49163] Running
	I0929 11:31:34.527494  595895 system_pods.go:61] "kube-apiserver-addons-214441" [302fdd61-6c51-4e6e-a1af-7856ccb9f2ca] Running
	I0929 11:31:34.527502  595895 system_pods.go:61] "kube-controller-manager-addons-214441" [3e9c3a44-d14e-4335-a141-f7c648e6159d] Running
	I0929 11:31:34.527507  595895 system_pods.go:61] "kube-ingress-dns-minikube" [12920e9d-debd-4783-9f79-1c3a345e576e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 11:31:34.527513  595895 system_pods.go:61] "kube-proxy-d9fnb" [229af565-3300-4cb4-8289-f3d9b4a9af81] Running
	I0929 11:31:34.527520  595895 system_pods.go:61] "kube-scheduler-addons-214441" [bcaec156-9259-4787-9d02-01a2203b43ac] Running
	I0929 11:31:34.527524  595895 system_pods.go:61] "metrics-server-85b7d694d7-zlrv7" [003bc9b9-be00-4e08-8af7-3443d0502181] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 11:31:34.527533  595895 system_pods.go:61] "nvidia-device-plugin-daemonset-x7b8m" [b96ad59d-d30d-4437-a5c7-ce0c5fc69348] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 11:31:34.527541  595895 system_pods.go:61] "registry-66898fdd98-d7zx7" [e0282924-ef3d-48eb-8906-ea10f183b39e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 11:31:34.527547  595895 system_pods.go:61] "registry-creds-764b6fb674-td8pw" [c6001777-2f97-49d8-972b-27d373557795] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 11:31:34.527557  595895 system_pods.go:61] "registry-proxy-grb7m" [d79830e7-a640-46a7-9b07-38e39aceac96] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 11:31:34.527562  595895 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pw4g9" [74b07f42-da97-4ad5-8fdb-748ab5cfbde2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:31:34.527571  595895 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wvh2l" [987a4a45-4d77-43ac-816c-ebc08faf51bc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:31:34.527575  595895 system_pods.go:61] "storage-provisioner" [e1a79041-edc7-4f8e-96cb-8b3566893a0e] Running
	I0929 11:31:34.527582  595895 system_pods.go:74] duration metric: took 84.42539ms to wait for pod list to return data ...
	I0929 11:31:34.527594  595895 default_sa.go:34] waiting for default service account to be created ...
	I0929 11:31:34.549252  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:34.556947  595895 default_sa.go:45] found service account: "default"
	I0929 11:31:34.556977  595895 default_sa.go:55] duration metric: took 29.376735ms for default service account to be created ...
	I0929 11:31:34.556988  595895 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 11:31:34.596290  595895 system_pods.go:86] 20 kube-system pods found
	I0929 11:31:34.596322  595895 system_pods.go:89] "amd-gpu-device-plugin-7jx7f" [97d8a167-36be-44b5-b99c-2b55db99df3e] Running
	I0929 11:31:34.596330  595895 system_pods.go:89] "coredns-66bc5c9577-fkh52" [bd9cc142-39db-4ed9-9ecc-3d270499136c] Running
	I0929 11:31:34.596334  595895 system_pods.go:89] "coredns-66bc5c9577-sgnb2" [343874b8-6c4d-4e36-8ebf-a379e3a93a98] Running
	I0929 11:31:34.596343  595895 system_pods.go:89] "csi-hostpath-attacher-0" [27d9af25-8d41-4c37-9359-c7bd4f88f09f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 11:31:34.596349  595895 system_pods.go:89] "csi-hostpath-resizer-0" [4f24e046-d5b2-41ff-a051-b0a572bf9348] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 11:31:34.596357  595895 system_pods.go:89] "csi-hostpathplugin-8279f" [3cc6db25-521c-4711-9d36-e22ab6d16249] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 11:31:34.596361  595895 system_pods.go:89] "etcd-addons-214441" [1710b264-ccec-4054-8a1c-7d8e5ce49163] Running
	I0929 11:31:34.596365  595895 system_pods.go:89] "kube-apiserver-addons-214441" [302fdd61-6c51-4e6e-a1af-7856ccb9f2ca] Running
	I0929 11:31:34.596369  595895 system_pods.go:89] "kube-controller-manager-addons-214441" [3e9c3a44-d14e-4335-a141-f7c648e6159d] Running
	I0929 11:31:34.596375  595895 system_pods.go:89] "kube-ingress-dns-minikube" [12920e9d-debd-4783-9f79-1c3a345e576e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 11:31:34.596381  595895 system_pods.go:89] "kube-proxy-d9fnb" [229af565-3300-4cb4-8289-f3d9b4a9af81] Running
	I0929 11:31:34.596385  595895 system_pods.go:89] "kube-scheduler-addons-214441" [bcaec156-9259-4787-9d02-01a2203b43ac] Running
	I0929 11:31:34.596390  595895 system_pods.go:89] "metrics-server-85b7d694d7-zlrv7" [003bc9b9-be00-4e08-8af7-3443d0502181] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 11:31:34.596398  595895 system_pods.go:89] "nvidia-device-plugin-daemonset-x7b8m" [b96ad59d-d30d-4437-a5c7-ce0c5fc69348] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 11:31:34.596404  595895 system_pods.go:89] "registry-66898fdd98-d7zx7" [e0282924-ef3d-48eb-8906-ea10f183b39e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 11:31:34.596409  595895 system_pods.go:89] "registry-creds-764b6fb674-td8pw" [c6001777-2f97-49d8-972b-27d373557795] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 11:31:34.596413  595895 system_pods.go:89] "registry-proxy-grb7m" [d79830e7-a640-46a7-9b07-38e39aceac96] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 11:31:34.596421  595895 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pw4g9" [74b07f42-da97-4ad5-8fdb-748ab5cfbde2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:31:34.596427  595895 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wvh2l" [987a4a45-4d77-43ac-816c-ebc08faf51bc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:31:34.596430  595895 system_pods.go:89] "storage-provisioner" [e1a79041-edc7-4f8e-96cb-8b3566893a0e] Running
	I0929 11:31:34.596439  595895 system_pods.go:126] duration metric: took 39.444621ms to wait for k8s-apps to be running ...
	I0929 11:31:34.596450  595895 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 11:31:34.596507  595895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:31:34.638029  595895 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 11:31:34.638063  595895 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 11:31:34.896745  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:35.000193  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:35.038316  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:35.057490  595895 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 11:31:35.057521  595895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 11:31:35.300242  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 11:31:35.379546  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:35.428677  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:35.535091  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:35.881406  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:35.938231  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:36.039311  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:36.382155  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:36.425663  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:36.535684  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:36.886954  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:36.927490  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:37.044975  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:37.382165  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:37.431026  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:37.547302  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:37.920673  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:37.944368  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:38.063651  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:38.330176  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.336121933s)
	W0929 11:31:38.330254  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:38.330284  595895 retry.go:31] will retry after 312.007159ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
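Editor's note: kubectl rejects ig-crd.yaml here because at least one document in the file sets neither apiVersion nor kind, so validation cannot identify the object at all; the other manifests in the same invocation still apply, which is why everything else reports "unchanged". A small sketch (an illustration, not part of minikube) that reproduces the same pre-check by decoding each YAML document and inspecting its type metadata:

package main

import (
	"bufio"
	"fmt"
	"io"
	"os"

	utilyaml "k8s.io/apimachinery/pkg/util/yaml"
	sigsyaml "sigs.k8s.io/yaml"
)

// typeMeta captures only the two fields kubectl's validator complains about.
type typeMeta struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
}

func main() {
	f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	reader := utilyaml.NewYAMLReader(bufio.NewReader(f))
	for i := 1; ; i++ {
		doc, err := reader.Read() // one YAML document per call
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		var tm typeMeta
		if err := sigsyaml.Unmarshal(doc, &tm); err != nil {
			fmt.Printf("document %d: not valid YAML: %v\n", i, err)
			continue
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			fmt.Printf("document %d: apiVersion or kind not set (kubectl will reject it)\n", i)
		}
	}
}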
	I0929 11:31:38.330290  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.248696545s)
	I0929 11:31:38.330341  595895 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.73381029s)
	I0929 11:31:38.330367  595895 system_svc.go:56] duration metric: took 3.733914032s WaitForService to wait for kubelet
	I0929 11:31:38.330377  595895 kubeadm.go:578] duration metric: took 20.888935766s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:31:38.330403  595895 node_conditions.go:102] verifying NodePressure condition ...
	I0929 11:31:38.330343  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:38.330449  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:38.330448  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (3.030164486s)
	I0929 11:31:38.330495  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:38.330509  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:38.330817  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:38.330832  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:38.330841  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:38.330848  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:38.330851  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:38.330882  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:38.330895  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:38.330903  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:31:38.330910  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:31:38.331221  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:31:38.331223  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:38.331238  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:38.331251  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:31:38.331258  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:31:38.332465  595895 addons.go:479] Verifying addon gcp-auth=true in "addons-214441"
	I0929 11:31:38.334695  595895 out.go:179] * Verifying gcp-auth addon...
	I0929 11:31:38.336858  595895 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 11:31:38.341614  595895 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0929 11:31:38.341645  595895 node_conditions.go:123] node cpu capacity is 2
	I0929 11:31:38.341662  595895 node_conditions.go:105] duration metric: took 11.25287ms to run NodePressure ...
	I0929 11:31:38.341688  595895 start.go:241] waiting for startup goroutines ...
	I0929 11:31:38.343873  595895 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 11:31:38.343896  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:38.381193  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:38.423947  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:38.537472  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:38.642514  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:38.843272  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:38.944959  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:38.945123  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:39.033029  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:39.342350  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:39.380435  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:39.424230  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:39.537307  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:39.645310  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.002737784s)
	W0929 11:31:39.645357  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:39.645385  595895 retry.go:31] will retry after 298.904966ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:39.841477  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:39.879072  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:39.922915  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:39.945025  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:40.034681  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:40.343272  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:40.382403  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:40.422942  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:40.539442  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:40.844610  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:40.879893  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:40.924951  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:41.033826  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:41.124246  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.179166796s)
	W0929 11:31:41.124315  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:41.124339  595895 retry.go:31] will retry after 649.538473ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:41.343005  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:41.380641  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:41.425734  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:41.533709  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:41.774560  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:41.841236  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:41.878527  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:41.924650  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:42.035789  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:42.342468  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:42.380731  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:42.426156  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:42.534471  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:42.785912  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.011289133s)
	W0929 11:31:42.785977  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:42.786005  595895 retry.go:31] will retry after 983.289132ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:42.842132  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:42.879170  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:42.924415  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:43.036251  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:43.343664  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:43.382521  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:43.423598  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:43.534301  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:43.770317  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:43.843700  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:43.880339  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:43.925260  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:44.035702  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:44.342152  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:44.380186  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:44.427570  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:44.537930  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:44.812756  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.042397237s)
	W0929 11:31:44.812812  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:44.812836  595895 retry.go:31] will retry after 2.137947671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
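Editor's note: the "will retry after ..." delays recorded above grow from roughly 200ms to several seconds with some randomness, the shape of a jittered exponential backoff. A standalone sketch of that behaviour using apimachinery's wait.Backoff; the parameters are illustrative guesses, not minikube's actual retry settings:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	backoff := wait.Backoff{
		Duration: 200 * time.Millisecond, // first delay, close to the ~214ms seen above
		Factor:   1.8,                    // growth per attempt
		Jitter:   0.4,                    // randomization so delays differ run to run
		Steps:    8,                      // give up after eight attempts
	}
	attempt := 0
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		fmt.Printf("attempt %d at %s\n", attempt, time.Now().Format("15:04:05.000"))
		// Simulate an apply that keeps failing: returning (false, nil) asks for another retry.
		return false, nil
	})
	if err != nil {
		fmt.Printf("gave up after %d attempts: %v\n", attempt, err)
	}
}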
	I0929 11:31:44.843045  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:44.881899  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:44.924762  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:45.035718  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:45.343550  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:45.378897  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:45.424866  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:45.534338  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:45.841433  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:45.877671  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:45.923645  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:46.034379  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:46.372337  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:46.406356  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:46.426866  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:46.534032  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:46.842343  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:46.879578  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:46.925175  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:46.951146  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:47.034343  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:47.344240  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:47.382773  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:47.424668  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:47.540037  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:47.843427  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:47.879391  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:47.924262  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:47.960092  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.008893629s)
	W0929 11:31:47.960177  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:47.960206  595895 retry.go:31] will retry after 2.504757299s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:48.033591  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:48.341481  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:48.378697  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:48.424514  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:48.536592  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:48.842185  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:48.879742  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:48.923614  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:49.034098  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:49.340781  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:49.379506  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:49.423231  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:49.534207  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:49.842436  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:49.877896  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:49.924231  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:50.034614  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:50.341556  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:50.379007  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:50.423685  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:50.465827  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:50.536792  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:50.843824  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:50.879454  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:50.924711  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:51.035609  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:51.343958  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:51.379841  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:51.424239  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:51.468054  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.002171892s)
	W0929 11:31:51.468114  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:51.468140  595895 retry.go:31] will retry after 5.613548218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
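The apply fails for the same reason on every retry: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest does not declare apiVersion and kind, fields every Kubernetes object must carry. A minimal sketch of a well-formed CustomResourceDefinition header is shown below for comparison; the group, plural, and kind names are illustrative assumptions, since the actual contents of ig-crd.yaml do not appear in this log.

    # Illustrative sketch only; the names below are assumptions, not taken from ig-crd.yaml.
    apiVersion: apiextensions.k8s.io/v1        # required on every object
    kind: CustomResourceDefinition             # required on every object
    metadata:
      name: traces.gadget.kinvolk.io           # hypothetical CRD name
    spec:
      group: gadget.kinvolk.io
      scope: Namespaced
      names:
        plural: traces
        singular: trace
        kind: Trace
      versions:
        - name: v1alpha1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              x-kubernetes-preserve-unknown-fields: true

Passing --validate=false, as the error text suggests, would only silence the client-side check; objects missing apiVersion and kind still cannot be created, so the retries below keep hitting the identical error.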
	I0929 11:31:51.533585  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:51.963029  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:51.963886  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:51.964026  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:52.060713  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:52.343223  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:52.378836  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:52.424767  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:52.534427  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:52.849585  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:52.879670  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:52.948684  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:53.048366  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:53.346453  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:53.380741  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:53.426760  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:53.533978  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:53.840987  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:53.879766  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:53.924223  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:54.035753  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:54.342742  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:54.378763  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:54.423439  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:54.535260  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:54.841859  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:54.880183  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:54.925299  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:55.033854  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:55.340853  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:55.378822  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:55.424172  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:55.534313  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:55.842189  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:55.879647  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:55.925521  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:56.034145  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:56.341524  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:56.384803  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:56.424070  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:56.533658  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:56.845007  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:56.881917  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:56.944166  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:57.044730  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:57.082647  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:31:57.345840  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:57.379131  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:57.425387  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:57.534328  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:57.843711  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:57.879327  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:57.925624  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:58.038058  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:58.345139  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:58.379479  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:58.427479  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:58.431242  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.348544969s)
	W0929 11:31:58.431293  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:58.431314  595895 retry.go:31] will retry after 5.599503168s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:31:58.535825  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:58.841717  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:58.878293  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:58.926559  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:59.035878  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:59.341486  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:59.381532  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:59.425077  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:31:59.532752  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:31:59.841172  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:31:59.878180  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:31:59.923096  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:00.034481  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:00.557941  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:00.559858  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:00.559963  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:00.560670  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:00.841990  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:00.879357  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:00.926097  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:01.036394  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:01.344642  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:01.379875  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:01.425784  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:01.534466  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:01.842499  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:01.878243  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:01.924047  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:02.033958  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:02.342377  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:02.380154  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:02.423813  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:02.535090  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:02.843862  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:02.879556  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:02.924521  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:03.034759  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:03.340099  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:03.378625  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:03.423534  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:03.534511  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:03.841201  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:03.878471  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:03.924393  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:04.031608  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:32:04.037031  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:04.344499  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:04.378709  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:04.426297  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:04.536239  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:04.842255  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:04.878783  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:04.925876  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:05.037628  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:05.250099  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.218439403s)
	W0929 11:32:05.250163  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:05.250186  595895 retry.go:31] will retry after 6.3969875s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:05.342875  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:05.380683  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:05.424490  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:05.534483  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:05.841804  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:05.880284  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:05.923385  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:06.034868  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:06.341952  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:06.378384  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:06.426408  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:06.535793  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:06.842154  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:06.880699  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:06.924358  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:07.035474  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:07.343686  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:07.378323  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:07.423762  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:07.535390  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:07.843851  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:07.881716  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:07.927684  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:08.037583  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:08.341340  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:08.380517  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:08.424488  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:08.535292  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:08.841002  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:08.879020  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:08.924253  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:09.089297  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:09.340800  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:09.377819  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:09.423823  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:09.534297  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:09.849243  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:09.950172  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:09.950267  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:10.036059  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:10.346922  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:10.379976  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:10.424634  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:10.538864  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:10.842015  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:10.879192  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:10.925328  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:11.040957  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:11.349029  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:11.380885  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:11.452716  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:11.533526  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:11.648223  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:32:11.846882  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:11.881994  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:11.924898  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:12.037323  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:12.342006  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:12.378476  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:12.425404  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:12.544040  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:12.792386  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.144111976s)
	W0929 11:32:12.792447  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:12.792475  595895 retry.go:31] will retry after 13.411476283s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:12.842021  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:12.880179  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:12.924788  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:13.040328  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:13.342434  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:13.378229  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:13.423792  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:13.533728  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:13.843276  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:13.881114  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:13.924958  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:14.034759  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:14.342679  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:14.391569  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:14.496903  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:14.537421  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:14.843175  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:14.880166  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:14.923743  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:15.033994  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:15.343313  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:15.378881  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:32:15.423448  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:15.538003  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:15.845026  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:15.879663  595895 kapi.go:107] duration metric: took 42.005359357s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 11:32:15.924537  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:16.034645  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:16.341847  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:16.423671  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:16.542699  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:16.844239  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:16.931285  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:17.038278  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:17.353396  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:17.429078  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:17.543634  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:17.844298  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:17.946425  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:18.041877  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:18.345833  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:18.428431  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:18.540908  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:18.840650  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:18.941953  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:19.044517  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:19.341978  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:19.424948  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:19.534807  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:19.839721  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:19.923994  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:20.033049  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:20.342737  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:20.425291  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:20.540624  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:20.844143  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:20.923381  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:21.034820  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:21.343509  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:21.423753  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:21.533929  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:21.841334  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:21.923232  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:22.035002  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:22.630689  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:22.632895  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:22.632941  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:22.845479  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:22.926876  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:23.038229  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:23.355255  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:23.427225  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:23.538625  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:23.844878  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:23.934777  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:24.035280  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:24.346419  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:24.423729  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:24.534589  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:24.842134  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:24.923902  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:25.034892  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:25.362314  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:25.488458  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:25.587385  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:25.861373  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:25.929934  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:26.034355  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:26.204639  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:32:26.361386  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:26.429512  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:26.537022  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:26.843446  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:26.926054  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:27.035634  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:27.344336  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:27.424901  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:27.537642  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:27.644135  595895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.439429306s)
	W0929 11:32:27.644198  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:27.644227  595895 retry.go:31] will retry after 29.327619656s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:27.842768  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:27.923415  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:28.034767  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:28.343738  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:28.445503  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:28.546159  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:28.851845  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:28.927009  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:29.033400  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:29.341998  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:29.426197  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:29.537012  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:29.842012  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:29.924188  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:30.034037  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:30.346865  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:30.430853  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:30.542769  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:30.842367  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:30.922904  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:31.033768  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:31.341881  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:31.425338  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:31.535963  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:31.844006  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:31.924398  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:32.034705  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:32.346065  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:32.423672  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:32.534377  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:32.842447  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:32.925931  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:33.034800  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:33.387960  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:33.429171  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:33.546901  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:33.852519  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:33.953288  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:34.035154  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:34.344025  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:34.431259  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:34.536600  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:34.843653  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:34.927609  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:35.036794  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:35.341408  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:35.425312  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:35.541227  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:35.847181  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:35.947699  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:36.035760  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:36.344915  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:36.424144  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:36.535593  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:36.841859  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:36.924975  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:37.037919  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:37.452583  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:37.459370  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:37.537236  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:37.841013  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:37.923280  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:38.036969  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:38.340515  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:38.425769  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:38.549235  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:38.842439  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:38.925062  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:39.035751  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:39.341398  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:39.422778  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:39.534951  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:39.841870  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:39.925988  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:40.034408  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:40.340654  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:40.424350  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:40.535075  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:40.843236  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:40.924921  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:41.034406  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:41.497913  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:41.499293  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:41.535243  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:41.844020  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:41.923065  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:42.045660  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:42.342026  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:42.426493  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:42.535570  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:42.841485  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:42.923010  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:43.039027  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:43.346733  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:43.432195  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:43.540145  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:43.885089  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:43.972714  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:44.068027  595895 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:32:44.345507  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:44.427061  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:44.535862  595895 kapi.go:107] duration metric: took 1m14.00612311s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 11:32:44.842493  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:44.929592  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:45.347246  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:45.424028  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:45.841905  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:45.923701  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:46.347078  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:46.425229  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:46.845817  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:46.925006  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:47.341259  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:47.426132  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:47.845143  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:47.924205  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:48.349502  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:32:48.452604  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:48.846442  595895 kapi.go:107] duration metric: took 1m10.509578031s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 11:32:48.847867  595895 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-214441 cluster.
	I0929 11:32:48.849227  595895 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 11:32:48.850374  595895 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0929 11:32:48.946549  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:49.426824  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:49.927802  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:50.426120  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:50.925871  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:51.426655  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:51.927170  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:52.426213  595895 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:32:52.923791  595895 kapi.go:107] duration metric: took 1m18.504852087s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0929 11:32:56.972597  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 11:32:57.723998  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:32:57.724041  595895 retry.go:31] will retry after 18.741816746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:33:16.468501  595895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 11:33:17.218683  595895 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:33:17.218783  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:33:17.218797  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:33:17.219140  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:33:17.219161  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:33:17.219172  595895 main.go:141] libmachine: Making call to close driver server
	I0929 11:33:17.219180  595895 main.go:141] libmachine: (addons-214441) Calling .Close
	I0929 11:33:17.219203  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	I0929 11:33:17.219480  595895 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:33:17.219502  595895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:33:17.219534  595895 main.go:141] libmachine: (addons-214441) DBG | Closing plugin on server side
	W0929 11:33:17.219634  595895 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0929 11:33:17.221637  595895 out.go:179] * Enabled addons: ingress-dns, storage-provisioner-rancher, storage-provisioner, cloud-spanner, volcano, amd-gpu-device-plugin, metrics-server, registry-creds, nvidia-device-plugin, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0929 11:33:17.223007  595895 addons.go:514] duration metric: took 1m59.781528816s for enable addons: enabled=[ingress-dns storage-provisioner-rancher storage-provisioner cloud-spanner volcano amd-gpu-device-plugin metrics-server registry-creds nvidia-device-plugin yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0929 11:33:17.223046  595895 start.go:246] waiting for cluster config update ...
	I0929 11:33:17.223066  595895 start.go:255] writing updated cluster config ...
	I0929 11:33:17.223379  595895 ssh_runner.go:195] Run: rm -f paused
	I0929 11:33:17.229885  595895 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:33:17.234611  595895 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fkh52" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.240669  595895 pod_ready.go:94] pod "coredns-66bc5c9577-fkh52" is "Ready"
	I0929 11:33:17.240694  595895 pod_ready.go:86] duration metric: took 6.057488ms for pod "coredns-66bc5c9577-fkh52" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.243134  595895 pod_ready.go:83] waiting for pod "etcd-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.248977  595895 pod_ready.go:94] pod "etcd-addons-214441" is "Ready"
	I0929 11:33:17.249003  595895 pod_ready.go:86] duration metric: took 5.848678ms for pod "etcd-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.251694  595895 pod_ready.go:83] waiting for pod "kube-apiserver-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.257270  595895 pod_ready.go:94] pod "kube-apiserver-addons-214441" is "Ready"
	I0929 11:33:17.257299  595895 pod_ready.go:86] duration metric: took 5.583626ms for pod "kube-apiserver-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.259585  595895 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.635253  595895 pod_ready.go:94] pod "kube-controller-manager-addons-214441" is "Ready"
	I0929 11:33:17.635287  595895 pod_ready.go:86] duration metric: took 375.675116ms for pod "kube-controller-manager-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:17.834921  595895 pod_ready.go:83] waiting for pod "kube-proxy-d9fnb" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:18.234706  595895 pod_ready.go:94] pod "kube-proxy-d9fnb" is "Ready"
	I0929 11:33:18.234735  595895 pod_ready.go:86] duration metric: took 399.786159ms for pod "kube-proxy-d9fnb" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:18.435590  595895 pod_ready.go:83] waiting for pod "kube-scheduler-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:18.834304  595895 pod_ready.go:94] pod "kube-scheduler-addons-214441" is "Ready"
	I0929 11:33:18.834340  595895 pod_ready.go:86] duration metric: took 398.719914ms for pod "kube-scheduler-addons-214441" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:33:18.834353  595895 pod_ready.go:40] duration metric: took 1.60442513s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:33:18.881427  595895 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 11:33:18.883901  595895 out.go:179] * Done! kubectl is now configured to use "addons-214441" cluster and "default" namespace by default
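Note on the one addon failure in the start log above: the inspektor-gadget enable is a manifest validation error, not a cluster problem. kubectl rejects /etc/kubernetes/addons/ig-crd.yaml because the file has no top-level apiVersion and kind fields, while the other 16 addons enable cleanly. A minimal way to confirm this, assuming the same profile is still running (these commands are illustrative and were not part of the test run):

    # Inspect the first lines of the rejected manifest inside the node
    minikube -p addons-214441 ssh "sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml"
    # Every Kubernetes object must declare apiVersion and kind; the --validate=false
    # flag suggested in the error only suppresses the check, it does not fix the file.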
	
	
	==> Docker <==
	Sep 29 11:40:05 addons-214441 dockerd[1525]: time="2025-09-29T11:40:05.960771913Z" level=info msg="ignoring event" container=e7580cc057c8482e4f15c21b50317fb3c99e5d88f3ee9c407f7ddff4f7c9b6e9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:40:06 addons-214441 dockerd[1525]: time="2025-09-29T11:40:06.250175198Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:40:06 addons-214441 cri-dockerd[1389]: time="2025-09-29T11:40:06Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-66898fdd98-d7zx7_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 29 11:40:06 addons-214441 dockerd[1525]: time="2025-09-29T11:40:06.340568088Z" level=info msg="ignoring event" container=962de4a995d2e4f585d50756761141be3819d075ab3e3e3e4775a6abc009838b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:40:06 addons-214441 dockerd[1525]: time="2025-09-29T11:40:06.439621334Z" level=info msg="ignoring event" container=9a881a5471f2a299dd7f23bb298226952f503edcf31e1e1def70e7d98ef7cb14 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:40:08 addons-214441 cri-dockerd[1389]: time="2025-09-29T11:40:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/33cb6d6c42ab92f2122c4c39939d2d0aa959f1672e66d392601e14df9fc1a791/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 29 11:40:08 addons-214441 dockerd[1525]: time="2025-09-29T11:40:08.949554459Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:40:12 addons-214441 dockerd[1525]: time="2025-09-29T11:40:12.472787208Z" level=info msg="ignoring event" container=5f92d762e43b0d126c794f2602dae338d6dc24c2365869c3059258162959502e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:40:12 addons-214441 cri-dockerd[1389]: time="2025-09-29T11:40:12Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"nvidia-device-plugin-daemonset-x7b8m_kube-system\": unexpected command output nsenter: cannot open /proc/3594/ns/net: No such file or directory\n with error: exit status 1"
	Sep 29 11:40:12 addons-214441 dockerd[1525]: time="2025-09-29T11:40:12.835736081Z" level=info msg="ignoring event" container=32059c64edb966cb5f7e2340a1e2ceca69e13d023e1ed9eaf2c360f517f8a582 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 11:40:13 addons-214441 cri-dockerd[1389]: time="2025-09-29T11:40:13Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c8b3e1c8b1ffdce105f2d4b1845989f032f14be6ab336366dcc8033cf1a26d29/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 29 11:40:14 addons-214441 dockerd[1525]: time="2025-09-29T11:40:14.191835943Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 11:40:14 addons-214441 dockerd[1525]: time="2025-09-29T11:40:14.235148960Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:40:24 addons-214441 dockerd[1525]: time="2025-09-29T11:40:24.176052084Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:40:27 addons-214441 dockerd[1525]: time="2025-09-29T11:40:27.068605835Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 11:40:27 addons-214441 dockerd[1525]: time="2025-09-29T11:40:27.103547928Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:40:34 addons-214441 dockerd[1525]: time="2025-09-29T11:40:34.210704879Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:40:52 addons-214441 dockerd[1525]: time="2025-09-29T11:40:52.292597549Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:40:52 addons-214441 cri-dockerd[1389]: time="2025-09-29T11:40:52Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Sep 29 11:40:52 addons-214441 dockerd[1525]: time="2025-09-29T11:40:52.320663300Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 11:40:52 addons-214441 dockerd[1525]: time="2025-09-29T11:40:52.363731053Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:41:24 addons-214441 dockerd[1525]: time="2025-09-29T11:41:24.170858049Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:41:34 addons-214441 dockerd[1525]: time="2025-09-29T11:41:34.169371020Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:41:41 addons-214441 dockerd[1525]: time="2025-09-29T11:41:41.068354920Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 11:41:41 addons-214441 dockerd[1525]: time="2025-09-29T11:41:41.109169619Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
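Every failed pull in the Docker log above hits the same wall: docker.io answers toomanyrequests because the node pulls anonymously, so the nginx:latest and busybox images never arrive and the pods that need them stay in ImagePullBackOff. One workaround, sketched here and not used in this run, is to pull the image once on the host and side-load it so the in-cluster runtime never contacts Docker Hub:

    # Sketch: pull once on the host (within the host's own limit, or after docker login)
    docker pull docker.io/library/nginx:latest
    # then copy the image into the minikube node so no in-cluster pull is needed
    minikube -p addons-214441 image load docker.io/library/nginx:latest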
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	8f0982c238973       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          2 minutes ago       Running             busybox                                  0                   66bafac6b9afb       busybox
	af544573fc0a7       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          9 minutes ago       Running             csi-snapshotter                          0                   02a7d350b8353       csi-hostpathplugin-8279f
	0ce41bd4faa5b       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          9 minutes ago       Running             csi-provisioner                          0                   02a7d350b8353       csi-hostpathplugin-8279f
	a8b5f59d15a16       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            9 minutes ago       Running             liveness-probe                           0                   02a7d350b8353       csi-hostpathplugin-8279f
	2514173d96a26       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           9 minutes ago       Running             hostpath                                 0                   02a7d350b8353       csi-hostpathplugin-8279f
	9b5cb54a94a47       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             9 minutes ago       Running             controller                               0                   8b83af6a32772       ingress-nginx-controller-9cc49f96f-h99dj
	ef4f6e22ce31a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                9 minutes ago       Running             node-driver-registrar                    0                   02a7d350b8353       csi-hostpathplugin-8279f
	5810f70edf860       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   9 minutes ago       Running             csi-external-health-monitor-controller   0                   02a7d350b8353       csi-hostpathplugin-8279f
	51f0c139f4f77       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              9 minutes ago       Running             csi-resizer                              0                   9e3b6780764f8       csi-hostpath-resizer-0
	e02a58717cc7c       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             9 minutes ago       Running             csi-attacher                             0                   00ac4103d1658       csi-hostpath-attacher-0
	e805d753e363a       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      9 minutes ago       Running             volume-snapshot-controller               0                   5ef4f58a4b6da       snapshot-controller-7d9fbc56b8-pw4g9
	868179ee6252a       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      9 minutes ago       Running             volume-snapshot-controller               0                   34844f808604d       snapshot-controller-7d9fbc56b8-wvh2l
	30d73d85a386c       8c217da6734db                                                                                                                                9 minutes ago       Exited              patch                                    1                   63ec050554699       ingress-nginx-admission-patch-tp6tp
	4182ff3d1e473       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   9 minutes ago       Exited              create                                   0                   f519da4bfec27       ingress-nginx-admission-create-s6nvq
	220ba84adaccb       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            9 minutes ago       Running             gadget                                   0                   95e2903b29637       gadget-xvvvf
	31302c4317135       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       9 minutes ago       Running             local-path-provisioner                   0                   621898582dfa1       local-path-provisioner-648f6765c9-fq5l2
	48adb1b2452be       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                                         10 minutes ago      Running             minikube-ingress-dns                     0                   3ce8cc04a57f5       kube-ingress-dns-minikube
	e49c7022a687d       gcr.io/cloud-spanner-emulator/emulator@sha256:15030dbba87c4fba50265cc80e62278eb41925d24d3a54c30563eff06304bf58                               10 minutes ago      Running             cloud-spanner-emulator                   0                   6c19c08a0c4b0       cloud-spanner-emulator-85f6b7fc65-vpv4f
	388ea771a1c89       6e38f40d628db                                                                                                                                10 minutes ago      Running             storage-provisioner                      0                   a451536f2a3ae       storage-provisioner
	ef7f4d809a410       rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                               10 minutes ago      Running             amd-gpu-device-plugin                    0                   efbec0257280a       amd-gpu-device-plugin-7jx7f
	5629c377b6053       52546a367cc9e                                                                                                                                10 minutes ago      Running             coredns                                  0                   b6c342cfbd0e9       coredns-66bc5c9577-fkh52
	cf32cea215063       df0860106674d                                                                                                                                10 minutes ago      Running             kube-proxy                               0                   164bb1f35fdbf       kube-proxy-d9fnb
	1b712309a5901       46169d968e920                                                                                                                                11 minutes ago      Running             kube-scheduler                           0                   16368e958b541       kube-scheduler-addons-214441
	5df8c088591fb       5f1f5298c888d                                                                                                                                11 minutes ago      Running             etcd                                     0                   0a4ad14786721       etcd-addons-214441
	b5368f01fa760       90550c43ad2bc                                                                                                                                11 minutes ago      Running             kube-apiserver                           0                   47b3b468b3308       kube-apiserver-addons-214441
	b7a56dc83eb1d       a0af72f2ec6d6                                                                                                                                11 minutes ago      Running             kube-controller-manager                  0                   8a7efdf44079d       kube-controller-manager-addons-214441
	
	
	==> controller_ingress [9b5cb54a94a4] <==
	I0929 11:32:44.982815       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-h99dj" node="addons-214441"
	I0929 11:32:45.020999       7 controller.go:228] "Backend successfully reloaded"
	I0929 11:32:45.021197       7 controller.go:240] "Initial sync, sleeping for 1 second"
	I0929 11:32:45.021384       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-h99dj", UID:"2bde8bfa-47f0-48da-9e63-cd8e2a0a38c6", APIVersion:"v1", ResourceVersion:"784", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0929 11:32:45.037639       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-h99dj" node="addons-214441"
	W0929 11:39:51.373839       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0929 11:39:51.377315       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0929 11:39:51.383910       7 store.go:443] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	W0929 11:39:51.384731       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0929 11:39:51.386972       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0929 11:39:51.388223       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"6c60e7a0-fa15-408e-810a-a4af1c88fe08", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2366", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0929 11:39:51.444940       7 controller.go:228] "Backend successfully reloaded"
	I0929 11:39:51.450504       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-h99dj", UID:"2bde8bfa-47f0-48da-9e63-cd8e2a0a38c6", APIVersion:"v1", ResourceVersion:"784", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0929 11:39:54.719235       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I0929 11:39:54.719924       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0929 11:39:54.771503       7 controller.go:228] "Backend successfully reloaded"
	I0929 11:39:54.772049       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-h99dj", UID:"2bde8bfa-47f0-48da-9e63-cd8e2a0a38c6", APIVersion:"v1", ResourceVersion:"784", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0929 11:39:58.057011       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 11:40:01.385065       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 11:40:04.718802       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 11:40:08.052750       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 11:40:11.385651       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I0929 11:40:44.966647       7 status.go:309] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.39.76"}]
	I0929 11:40:44.973434       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"6c60e7a0-fa15-408e-810a-a4af1c88fe08", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2672", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0929 11:40:44.974230       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
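The repeated "Service \"default/nginx\" does not have any active Endpoint" warnings are the ingress controller reporting that its backend pod never became Ready, which matches the failed nginx pull in the Docker log; the Ingress object itself syncs and is assigned the node address 192.168.39.76 at 11:40:44. Checks along these lines (illustrative, not captured in this report) would show an empty endpoints list until the pod's image arrives:

    kubectl --context addons-214441 get endpoints nginx -n default
    kubectl --context addons-214441 describe pod nginx -n default | grep -A5 Events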
	
	
	==> coredns [5629c377b605] <==
	[INFO] 10.244.0.7:52212 - 14403 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.001145753s
	[INFO] 10.244.0.7:52212 - 34526 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.001027976s
	[INFO] 10.244.0.7:52212 - 40091 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.002958291s
	[INFO] 10.244.0.7:52212 - 8101 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000112715s
	[INFO] 10.244.0.7:52212 - 55833 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000201304s
	[INFO] 10.244.0.7:52212 - 46374 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000813986s
	[INFO] 10.244.0.7:52212 - 13461 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00014644s
	[INFO] 10.244.0.7:58134 - 57276 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000168682s
	[INFO] 10.244.0.7:58134 - 56902 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000087725s
	[INFO] 10.244.0.7:45806 - 23713 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000124662s
	[INFO] 10.244.0.7:45806 - 23950 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000142715s
	[INFO] 10.244.0.7:42777 - 55128 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080735s
	[INFO] 10.244.0.7:42777 - 54892 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000216294s
	[INFO] 10.244.0.7:36398 - 14124 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000321419s
	[INFO] 10.244.0.7:36398 - 13929 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000550817s
	[INFO] 10.244.0.26:41550 - 7840 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00065483s
	[INFO] 10.244.0.26:48585 - 52888 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000202217s
	[INFO] 10.244.0.26:53114 - 55168 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000190191s
	[INFO] 10.244.0.26:47096 - 26187 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000662248s
	[INFO] 10.244.0.26:48999 - 38178 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00015298s
	[INFO] 10.244.0.26:58286 - 39587 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000285241s
	[INFO] 10.244.0.26:45238 - 61249 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003642198s
	[INFO] 10.244.0.26:33573 - 52185 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.003922074s
	[INFO] 10.244.0.30:45249 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.002086838s
	[INFO] 10.244.0.30:35918 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000164605s
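The NXDOMAIN bursts above are not failures: with ndots:5 in the pod resolv.conf (visible in the cri-dockerd re-write lines earlier), each short name is first tried against every search-list suffix, and every sequence ends with NOERROR for the fully qualified registry.kube-system.svc.cluster.local name, so cluster DNS is healthy. A trailing dot skips the search-list expansion entirely; for example (an illustrative check, assuming the busybox pod shown in this report carries the nslookup applet):

    kubectl --context addons-214441 exec busybox -- cat /etc/resolv.conf
    kubectl --context addons-214441 exec busybox -- nslookup registry.kube-system.svc.cluster.local.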
	
	
	==> describe nodes <==
	Name:               addons-214441
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-214441
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8c533eb3dce8338d3d4e7231ea97d1c44ed6f81
	                    minikube.k8s.io/name=addons-214441
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T11_31_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-214441
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-214441"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 11:31:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-214441
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:42:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:40:14 +0000   Mon, 29 Sep 2025 11:31:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:40:14 +0000   Mon, 29 Sep 2025 11:31:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:40:14 +0000   Mon, 29 Sep 2025 11:31:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:40:14 +0000   Mon, 29 Sep 2025 11:31:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    addons-214441
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 44179717398847cdb8d861dffe58e059
	  System UUID:                44179717-3988-47cd-b8d8-61dffe58e059
	  Boot ID:                    f083535d-5807-413a-9a6b-1a0bbe2d4432
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (23 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  default                     cloud-spanner-emulator-85f6b7fc65-vpv4f                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  gadget                      gadget-xvvvf                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-h99dj                      100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 amd-gpu-device-plugin-7jx7f                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-fkh52                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpathplugin-8279f                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-addons-214441                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-214441                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-214441                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-d9fnb                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-214441                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-7d9fbc56b8-pw4g9                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-7d9fbc56b8-wvh2l                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  local-path-storage          helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681    0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  local-path-storage          local-path-provisioner-648f6765c9-fq5l2                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  yakd-dashboard              yakd-dashboard-5ff678cb9-8b84x                                0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             388Mi (9%)  426Mi (10%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10m   kube-proxy       
	  Normal  Starting                 11m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m   kubelet          Node addons-214441 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m   kubelet          Node addons-214441 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m   kubelet          Node addons-214441 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m   node-controller  Node addons-214441 event: Registered Node addons-214441 in Controller
	  Normal  NodeReady                10m   kubelet          Node addons-214441 status is now: NodeReady
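The node itself looks healthy: Ready since 11:31:17, no pressure conditions, and only 850m of 2 CPUs (42%) and 388Mi of memory (9%) requested across 23 pods, so the failing tests are not a scheduling or capacity problem. A quick way to isolate the stuck workloads (a hypothetical follow-up, not part of the captured output):

    kubectl --context addons-214441 get pods -A --field-selector=status.phase!=Running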
	
	
	==> dmesg <==
	[  +0.120480] kauditd_printk_skb: 401 callbacks suppressed
	[Sep29 11:31] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.167720] kauditd_printk_skb: 165 callbacks suppressed
	[  +0.127166] kauditd_printk_skb: 19 callbacks suppressed
	[  +0.109876] kauditd_printk_skb: 297 callbacks suppressed
	[  +0.186219] kauditd_printk_skb: 164 callbacks suppressed
	[  +0.000058] kauditd_printk_skb: 275 callbacks suppressed
	[  +1.798616] kauditd_printk_skb: 343 callbacks suppressed
	[ +13.445646] kauditd_printk_skb: 68 callbacks suppressed
	[  +5.142447] kauditd_printk_skb: 20 callbacks suppressed
	[Sep29 11:32] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.199632] kauditd_printk_skb: 38 callbacks suppressed
	[  +1.030429] kauditd_printk_skb: 78 callbacks suppressed
	[  +5.195773] kauditd_printk_skb: 75 callbacks suppressed
	[  +5.274224] kauditd_printk_skb: 150 callbacks suppressed
	[  +5.780886] kauditd_printk_skb: 68 callbacks suppressed
	[  +8.295767] kauditd_printk_skb: 56 callbacks suppressed
	[Sep29 11:39] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.045350] kauditd_printk_skb: 59 callbacks suppressed
	[ +11.893143] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.745446] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.704785] kauditd_printk_skb: 81 callbacks suppressed
	[Sep29 11:40] kauditd_printk_skb: 79 callbacks suppressed
	[  +2.308317] kauditd_printk_skb: 113 callbacks suppressed
	[  +1.203541] kauditd_printk_skb: 47 callbacks suppressed
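The only kernel-side noise is kauditd_printk_skb reporting suppressed audit records, which is expected rate limiting while many containers start and stop; nothing here points at a node fault. If needed, the raw occurrence count can be pulled from the node (illustrative only):

    minikube -p addons-214441 ssh "dmesg | grep -c kauditd_printk_skb"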
	
	
	==> etcd [5df8c088591f] <==
	{"level":"info","ts":"2025-09-29T11:32:00.549416Z","caller":"traceutil/trace.go:172","msg":"trace[283960959] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1061; }","duration":"215.430561ms","start":"2025-09-29T11:32:00.333975Z","end":"2025-09-29T11:32:00.549406Z","steps":["trace[283960959] 'agreement among raft nodes before linearized reading'  (duration: 214.453965ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:32:00.549612Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"178.233017ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:00.549630Z","caller":"traceutil/trace.go:172","msg":"trace[1676271402] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1062; }","duration":"178.256779ms","start":"2025-09-29T11:32:00.371368Z","end":"2025-09-29T11:32:00.549625Z","steps":["trace[1676271402] 'agreement among raft nodes before linearized reading'  (duration: 178.210962ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:32:00.549775Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.256178ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:00.549795Z","caller":"traceutil/trace.go:172","msg":"trace[872905781] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1062; }","duration":"133.278789ms","start":"2025-09-29T11:32:00.416510Z","end":"2025-09-29T11:32:00.549789Z","steps":["trace[872905781] 'agreement among raft nodes before linearized reading'  (duration: 133.240765ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:32:22.619881Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"283.951682ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:22.619953Z","caller":"traceutil/trace.go:172","msg":"trace[256565612] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1138; }","duration":"284.054314ms","start":"2025-09-29T11:32:22.335884Z","end":"2025-09-29T11:32:22.619939Z","steps":["trace[256565612] 'range keys from in-memory index tree'  (duration: 283.898213ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:32:22.620417Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"203.038923ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:22.620455Z","caller":"traceutil/trace.go:172","msg":"trace[2141218366] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1138; }","duration":"203.079865ms","start":"2025-09-29T11:32:22.417365Z","end":"2025-09-29T11:32:22.620444Z","steps":["trace[2141218366] 'range keys from in-memory index tree'  (duration: 202.851561ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:32:37.446139Z","caller":"traceutil/trace.go:172","msg":"trace[1518739598] linearizableReadLoop","detail":"{readStateIndex:1281; appliedIndex:1281; }","duration":"111.376689ms","start":"2025-09-29T11:32:37.334743Z","end":"2025-09-29T11:32:37.446120Z","steps":["trace[1518739598] 'read index received'  (duration: 111.370356ms)","trace[1518739598] 'applied index is now lower than readState.Index'  (duration: 5.449µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:32:37.446365Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.596508ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:37.446409Z","caller":"traceutil/trace.go:172","msg":"trace[333303529] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1250; }","duration":"111.664223ms","start":"2025-09-29T11:32:37.334737Z","end":"2025-09-29T11:32:37.446401Z","steps":["trace[333303529] 'agreement among raft nodes before linearized reading'  (duration: 111.566754ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:32:37.447956Z","caller":"traceutil/trace.go:172","msg":"trace[1818807407] transaction","detail":"{read_only:false; response_revision:1251; number_of_response:1; }","duration":"216.083326ms","start":"2025-09-29T11:32:37.231864Z","end":"2025-09-29T11:32:37.447947Z","steps":["trace[1818807407] 'process raft request'  (duration: 214.333833ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:32:41.490882Z","caller":"traceutil/trace.go:172","msg":"trace[1943079177] linearizableReadLoop","detail":"{readStateIndex:1295; appliedIndex:1295; }","duration":"156.252408ms","start":"2025-09-29T11:32:41.334599Z","end":"2025-09-29T11:32:41.490852Z","steps":["trace[1943079177] 'read index received'  (duration: 156.245254ms)","trace[1943079177] 'applied index is now lower than readState.Index'  (duration: 4.49µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:32:41.491088Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.469181ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:32:41.491110Z","caller":"traceutil/trace.go:172","msg":"trace[366978766] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1264; }","duration":"156.509563ms","start":"2025-09-29T11:32:41.334595Z","end":"2025-09-29T11:32:41.491105Z","steps":["trace[366978766] 'agreement among raft nodes before linearized reading'  (duration: 156.436502ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:32:41.491567Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-29T11:32:41.150207Z","time spent":"341.358415ms","remote":"127.0.0.1:41482","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2025-09-29T11:39:57.948345Z","caller":"traceutil/trace.go:172","msg":"trace[1591406496] linearizableReadLoop","detail":"{readStateIndex:2551; appliedIndex:2551; }","duration":"124.72426ms","start":"2025-09-29T11:39:57.823478Z","end":"2025-09-29T11:39:57.948202Z","steps":["trace[1591406496] 'read index received'  (duration: 124.71863ms)","trace[1591406496] 'applied index is now lower than readState.Index'  (duration: 4.802µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:39:57.948549Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.025613ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:39:57.948597Z","caller":"traceutil/trace.go:172","msg":"trace[612703964] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2421; }","duration":"125.116152ms","start":"2025-09-29T11:39:57.823474Z","end":"2025-09-29T11:39:57.948590Z","steps":["trace[612703964] 'agreement among raft nodes before linearized reading'  (duration: 124.997233ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:39:57.949437Z","caller":"traceutil/trace.go:172","msg":"trace[1306847484] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2422; }","duration":"296.693601ms","start":"2025-09-29T11:39:57.652733Z","end":"2025-09-29T11:39:57.949427Z","steps":["trace[1306847484] 'process raft request'  (duration: 296.121623ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:39:58.302377Z","caller":"traceutil/trace.go:172","msg":"trace[126438438] transaction","detail":"{read_only:false; response_revision:2433; number_of_response:1; }","duration":"116.690338ms","start":"2025-09-29T11:39:58.185669Z","end":"2025-09-29T11:39:58.302359Z","steps":["trace[126438438] 'process raft request'  (duration: 107.946386ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:41:07.514630Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1800}
	{"level":"info","ts":"2025-09-29T11:41:07.635361Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1800,"took":"119.419717ms","hash":3783191704,"current-db-size-bytes":8732672,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":5963776,"current-db-size-in-use":"6.0 MB"}
	{"level":"info","ts":"2025-09-29T11:41:07.635428Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3783191704,"revision":1800,"compact-revision":-1}
	
	
	==> kernel <==
	 11:42:11 up 11 min,  0 users,  load average: 0.38, 0.79, 0.67
	Linux addons-214441 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [b5368f01fa76] <==
	I0929 11:39:23.259925       1 handler.go:285] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0929 11:39:23.308951       1 handler.go:285] Adding GroupVersion topology.volcano.sh v1alpha1 to ResourceManager
	I0929 11:39:23.466423       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	W0929 11:39:23.748690       1 cacher.go:182] Terminating all watchers from cacher commands.bus.volcano.sh
	I0929 11:39:23.776062       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0929 11:39:23.839444       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0929 11:39:24.054959       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0929 11:39:24.460545       1 cacher.go:182] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0929 11:39:24.467415       1 cacher.go:182] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0929 11:39:24.500846       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0929 11:39:24.516151       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W0929 11:39:24.580645       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0929 11:39:25.117972       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0929 11:39:25.322421       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	E0929 11:39:42.471472       1 conn.go:339] Error on socket receive: read tcp 192.168.39.76:8443->192.168.39.1:44978: use of closed network connection
	E0929 11:39:42.758211       1 conn.go:339] Error on socket receive: read tcp 192.168.39.76:8443->192.168.39.1:45000: use of closed network connection
	I0929 11:39:45.674152       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:39:51.379831       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0929 11:39:51.635969       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.133.174"}
	I0929 11:39:52.039060       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.167.87"}
	I0929 11:40:21.576337       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0929 11:40:21.997121       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:41:04.368312       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:41:09.156786       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 11:41:32.070520       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [b7a56dc83eb1] <==
	E0929 11:40:48.552650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:40:49.475732       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:40:49.477351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:40:49.609634       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:40:49.611229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:41:12.105664       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:41:12.106947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:41:13.701467       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:41:13.703728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:41:29.458555       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:41:29.460086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:41:31.360088       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:41:31.361838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:41:40.482033       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:41:40.483685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:41:40.609307       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:41:40.610612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:41:46.642813       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:41:46.644469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:41:48.675404       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:41:48.676794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:41:52.466905       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:41:52.468215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:42:03.635736       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:42:03.637350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [cf32cea21506] <==
	I0929 11:31:18.966107       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:31:19.067553       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:31:19.067585       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.76"]
	E0929 11:31:19.067663       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:31:19.367843       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 11:31:19.367925       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 11:31:19.367957       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:31:19.410838       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:31:19.411105       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:31:19.411117       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:31:19.438109       1 config.go:200] "Starting service config controller"
	I0929 11:31:19.438145       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:31:19.438165       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:31:19.438169       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:31:19.438197       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:31:19.438201       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:31:19.443612       1 config.go:309] "Starting node config controller"
	I0929 11:31:19.443644       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:31:19.443650       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 11:31:19.552512       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:31:19.552650       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 11:31:19.639397       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [1b712309a590] <==
	E0929 11:31:09.221196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 11:31:09.221236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 11:31:09.222033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 11:31:09.225006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 11:31:09.225514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 11:31:09.225802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 11:31:09.225865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 11:31:09.225922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 11:31:09.226012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 11:31:09.226045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 11:31:10.048406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 11:31:10.133629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 11:31:10.190360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 11:31:10.277104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 11:31:10.293798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 11:31:10.302970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 11:31:10.326331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 11:31:10.346485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 11:31:10.373940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 11:31:10.450205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 11:31:10.476705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 11:31:10.548049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 11:31:10.584420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 11:31:10.696768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 11:31:12.791660       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 11:41:24 addons-214441 kubelet[2504]: E0929 11:41:24.176174    2504 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 29 11:41:24 addons-214441 kubelet[2504]: E0929 11:41:24.176246    2504 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 29 11:41:24 addons-214441 kubelet[2504]: E0929 11:41:24.176413    2504 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(182f1b86-e027-4d79-a5a9-272a05688c3b): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 11:41:24 addons-214441 kubelet[2504]: E0929 11:41:24.176448    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="182f1b86-e027-4d79-a5a9-272a05688c3b"
	Sep 29 11:41:27 addons-214441 kubelet[2504]: E0929 11:41:27.050669    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681" podUID="dd1e5b21-7118-4e7b-ae96-07711f228569"
	Sep 29 11:41:31 addons-214441 kubelet[2504]: E0929 11:41:31.049358    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-8b84x" podUID="776cffb2-d8ee-4337-a96e-2a5d06549491"
	Sep 29 11:41:34 addons-214441 kubelet[2504]: E0929 11:41:34.174619    2504 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 29 11:41:34 addons-214441 kubelet[2504]: E0929 11:41:34.174697    2504 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 29 11:41:34 addons-214441 kubelet[2504]: E0929 11:41:34.174818    2504 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(aff7bf59-352b-45d6-9449-f442a6b25e27): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 11:41:34 addons-214441 kubelet[2504]: E0929 11:41:34.174861    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="aff7bf59-352b-45d6-9449-f442a6b25e27"
	Sep 29 11:41:39 addons-214441 kubelet[2504]: E0929 11:41:39.050758    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="182f1b86-e027-4d79-a5a9-272a05688c3b"
	Sep 29 11:41:41 addons-214441 kubelet[2504]: E0929 11:41:41.113223    2504 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 11:41:41 addons-214441 kubelet[2504]: E0929 11:41:41.113366    2504 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 11:41:41 addons-214441 kubelet[2504]: E0929 11:41:41.113466    2504 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681_local-path-storage(dd1e5b21-7118-4e7b-ae96-07711f228569): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 11:41:41 addons-214441 kubelet[2504]: E0929 11:41:41.113506    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681" podUID="dd1e5b21-7118-4e7b-ae96-07711f228569"
	Sep 29 11:41:46 addons-214441 kubelet[2504]: E0929 11:41:46.051128    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-8b84x" podUID="776cffb2-d8ee-4337-a96e-2a5d06549491"
	Sep 29 11:41:49 addons-214441 kubelet[2504]: E0929 11:41:49.045646    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="aff7bf59-352b-45d6-9449-f442a6b25e27"
	Sep 29 11:41:52 addons-214441 kubelet[2504]: E0929 11:41:52.056986    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="182f1b86-e027-4d79-a5a9-272a05688c3b"
	Sep 29 11:41:52 addons-214441 kubelet[2504]: E0929 11:41:52.059241    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681" podUID="dd1e5b21-7118-4e7b-ae96-07711f228569"
	Sep 29 11:41:58 addons-214441 kubelet[2504]: E0929 11:41:58.050220    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-8b84x" podUID="776cffb2-d8ee-4337-a96e-2a5d06549491"
	Sep 29 11:42:00 addons-214441 kubelet[2504]: E0929 11:42:00.046410    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="aff7bf59-352b-45d6-9449-f442a6b25e27"
	Sep 29 11:42:03 addons-214441 kubelet[2504]: E0929 11:42:03.050064    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="182f1b86-e027-4d79-a5a9-272a05688c3b"
	Sep 29 11:42:05 addons-214441 kubelet[2504]: I0929 11:42:05.046318    2504 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 11:42:06 addons-214441 kubelet[2504]: E0929 11:42:06.063521    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681" podUID="dd1e5b21-7118-4e7b-ae96-07711f228569"
	Sep 29 11:42:09 addons-214441 kubelet[2504]: E0929 11:42:09.049538    2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-8b84x" podUID="776cffb2-d8ee-4337-a96e-2a5d06549491"
	
	
	==> storage-provisioner [388ea771a1c8] <==
	W0929 11:41:47.342776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:49.348332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:49.355115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:51.358533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:51.364234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:53.367488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:53.376082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:55.381139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:55.395050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:57.399512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:57.406617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:59.414158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:59.425345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:42:01.429662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:42:01.438132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:42:03.444404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:42:03.450177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:42:05.454106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:42:05.459722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:42:07.463365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:42:07.469217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:42:09.475676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:42:09.498705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:42:11.505603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:42:11.513131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-214441 -n addons-214441
helpers_test.go:269: (dbg) Run:  kubectl --context addons-214441 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-s6nvq ingress-nginx-admission-patch-tp6tp helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681 yakd-dashboard-5ff678cb9-8b84x
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Yakd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-214441 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-s6nvq ingress-nginx-admission-patch-tp6tp helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681 yakd-dashboard-5ff678cb9-8b84x
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-214441 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-s6nvq ingress-nginx-admission-patch-tp6tp helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681 yakd-dashboard-5ff678cb9-8b84x: exit status 1 (90.965038ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-214441/192.168.39.76
	Start Time:       Mon, 29 Sep 2025 11:39:51 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rdmgz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rdmgz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m21s                default-scheduler  Successfully assigned default/nginx to addons-214441
	  Warning  Failed     2m20s                kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    48s (x4 over 2m20s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     48s (x4 over 2m20s)  kubelet            Error: ErrImagePull
	  Warning  Failed     48s (x3 over 2m6s)   kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    9s (x9 over 2m19s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     9s (x9 over 2m19s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-214441/192.168.39.76
	Start Time:       Mon, 29 Sep 2025 11:40:08 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:  10.244.0.31
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kt6ld (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-kt6ld:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  2m4s                default-scheduler  Successfully assigned default/task-pv-pod to addons-214441
	  Warning  Failed     80s                 kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    38s (x4 over 2m4s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     38s (x3 over 2m4s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     38s (x4 over 2m4s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    12s (x6 over 2m3s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     12s (x6 over 2m3s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tffd7 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-tffd7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-s6nvq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tp6tp" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681" not found
	Error from server (NotFound): pods "yakd-dashboard-5ff678cb9-8b84x" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-214441 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-s6nvq ingress-nginx-admission-patch-tp6tp helper-pod-create-pvc-80c6eaed-6b59-4fd5-b78c-ea16539d3681 yakd-dashboard-5ff678cb9-8b84x: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214441 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-214441 addons disable yakd --alsologtostderr -v=1: (5.768364462s)
--- FAIL: TestAddons/parallel/Yakd (128.26s)
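Note: the kubelet log and the pod events above all fail for the same reason: Docker Hub's unauthenticated pull rate limit (the repeated "toomanyrequests" responses) blocks the nginx, busybox and yakd image pulls. For local reruns, one workaround is to fetch the affected images outside kubelet's pull path and side-load them into the node. The commands below are only a sketch, not part of the test harness: the profile name and image references are taken from the output above, and the image load / cache add subcommands assume a reasonably recent minikube build such as the one under test.

	# Pull on the host (optionally after a docker login, which switches to the higher authenticated limit),
	# then load the image into the cluster so kubelet never has to contact Docker Hub:
	docker pull docker.io/nginx:alpine
	out/minikube-linux-amd64 -p addons-214441 image load docker.io/nginx:alpine
	# Or have minikube fetch and cache an image for the node directly:
	out/minikube-linux-amd64 -p addons-214441 cache add docker.io/busybox:stable

Pre-loading the images this way (or authenticating the node's Docker daemon) would let the nginx, task-pv-pod, helper-pod and yakd-dashboard pods described above start instead of sitting in ImagePullBackOff.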

x
+
TestFunctional/parallel/DashboardCmd (302.04s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-345567 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-345567 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-345567 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-345567 --alsologtostderr -v=1] stderr:
I0929 11:52:55.509281  608598 out.go:360] Setting OutFile to fd 1 ...
I0929 11:52:55.509651  608598 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:52:55.509666  608598 out.go:374] Setting ErrFile to fd 2...
I0929 11:52:55.509674  608598 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:52:55.510053  608598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21654-591397/.minikube/bin
I0929 11:52:55.510497  608598 mustload.go:65] Loading cluster: functional-345567
I0929 11:52:55.511019  608598 config.go:182] Loaded profile config "functional-345567": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:52:55.511596  608598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0929 11:52:55.511677  608598 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 11:52:55.531128  608598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39657
I0929 11:52:55.531850  608598 main.go:141] libmachine: () Calling .GetVersion
I0929 11:52:55.532700  608598 main.go:141] libmachine: Using API Version  1
I0929 11:52:55.532733  608598 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 11:52:55.533313  608598 main.go:141] libmachine: () Calling .GetMachineName
I0929 11:52:55.533631  608598 main.go:141] libmachine: (functional-345567) Calling .GetState
I0929 11:52:55.537182  608598 host.go:66] Checking if "functional-345567" exists ...
I0929 11:52:55.537634  608598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0929 11:52:55.537689  608598 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 11:52:55.554026  608598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38725
I0929 11:52:55.554602  608598 main.go:141] libmachine: () Calling .GetVersion
I0929 11:52:55.555193  608598 main.go:141] libmachine: Using API Version  1
I0929 11:52:55.555224  608598 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 11:52:55.555601  608598 main.go:141] libmachine: () Calling .GetMachineName
I0929 11:52:55.555777  608598 main.go:141] libmachine: (functional-345567) Calling .DriverName
I0929 11:52:55.555951  608598 api_server.go:166] Checking apiserver status ...
I0929 11:52:55.556019  608598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0929 11:52:55.556042  608598 main.go:141] libmachine: (functional-345567) Calling .GetSSHHostname
I0929 11:52:55.561138  608598 main.go:141] libmachine: (functional-345567) DBG | domain functional-345567 has defined MAC address 52:54:00:ee:f4:1f in network mk-functional-345567
I0929 11:52:55.561538  608598 main.go:141] libmachine: (functional-345567) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:f4:1f", ip: ""} in network mk-functional-345567: {Iface:virbr1 ExpiryTime:2025-09-29 12:49:37 +0000 UTC Type:0 Mac:52:54:00:ee:f4:1f Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-345567 Clientid:01:52:54:00:ee:f4:1f}
I0929 11:52:55.561563  608598 main.go:141] libmachine: (functional-345567) DBG | domain functional-345567 has defined IP address 192.168.39.165 and MAC address 52:54:00:ee:f4:1f in network mk-functional-345567
I0929 11:52:55.561827  608598 main.go:141] libmachine: (functional-345567) Calling .GetSSHPort
I0929 11:52:55.562096  608598 main.go:141] libmachine: (functional-345567) Calling .GetSSHKeyPath
I0929 11:52:55.562289  608598 main.go:141] libmachine: (functional-345567) Calling .GetSSHUsername
I0929 11:52:55.562501  608598 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/functional-345567/id_rsa Username:docker}
I0929 11:52:55.689985  608598 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/11823/cgroup
W0929 11:52:55.705381  608598 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/11823/cgroup: Process exited with status 1
stdout:

stderr:
I0929 11:52:55.705459  608598 ssh_runner.go:195] Run: ls
I0929 11:52:55.712285  608598 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8441/healthz ...
I0929 11:52:55.719467  608598 api_server.go:279] https://192.168.39.165:8441/healthz returned 200:
ok
W0929 11:52:55.719525  608598 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0929 11:52:55.719738  608598 config.go:182] Loaded profile config "functional-345567": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:52:55.719766  608598 addons.go:69] Setting dashboard=true in profile "functional-345567"
I0929 11:52:55.719780  608598 addons.go:238] Setting addon dashboard=true in "functional-345567"
I0929 11:52:55.719816  608598 host.go:66] Checking if "functional-345567" exists ...
I0929 11:52:55.720233  608598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0929 11:52:55.720285  608598 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 11:52:55.736532  608598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45861
I0929 11:52:55.737147  608598 main.go:141] libmachine: () Calling .GetVersion
I0929 11:52:55.737719  608598 main.go:141] libmachine: Using API Version  1
I0929 11:52:55.737748  608598 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 11:52:55.738175  608598 main.go:141] libmachine: () Calling .GetMachineName
I0929 11:52:55.738843  608598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0929 11:52:55.738893  608598 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 11:52:55.755200  608598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
I0929 11:52:55.755748  608598 main.go:141] libmachine: () Calling .GetVersion
I0929 11:52:55.756269  608598 main.go:141] libmachine: Using API Version  1
I0929 11:52:55.756292  608598 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 11:52:55.756623  608598 main.go:141] libmachine: () Calling .GetMachineName
I0929 11:52:55.756835  608598 main.go:141] libmachine: (functional-345567) Calling .GetState
I0929 11:52:55.758826  608598 main.go:141] libmachine: (functional-345567) Calling .DriverName
I0929 11:52:55.761248  608598 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0929 11:52:55.762692  608598 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0929 11:52:55.763920  608598 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0929 11:52:55.763939  608598 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0929 11:52:55.763965  608598 main.go:141] libmachine: (functional-345567) Calling .GetSSHHostname
I0929 11:52:55.767459  608598 main.go:141] libmachine: (functional-345567) DBG | domain functional-345567 has defined MAC address 52:54:00:ee:f4:1f in network mk-functional-345567
I0929 11:52:55.768153  608598 main.go:141] libmachine: (functional-345567) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:f4:1f", ip: ""} in network mk-functional-345567: {Iface:virbr1 ExpiryTime:2025-09-29 12:49:37 +0000 UTC Type:0 Mac:52:54:00:ee:f4:1f Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-345567 Clientid:01:52:54:00:ee:f4:1f}
I0929 11:52:55.768187  608598 main.go:141] libmachine: (functional-345567) DBG | domain functional-345567 has defined IP address 192.168.39.165 and MAC address 52:54:00:ee:f4:1f in network mk-functional-345567
I0929 11:52:55.768443  608598 main.go:141] libmachine: (functional-345567) Calling .GetSSHPort
I0929 11:52:55.768672  608598 main.go:141] libmachine: (functional-345567) Calling .GetSSHKeyPath
I0929 11:52:55.768859  608598 main.go:141] libmachine: (functional-345567) Calling .GetSSHUsername
I0929 11:52:55.769047  608598 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/functional-345567/id_rsa Username:docker}
I0929 11:52:55.886602  608598 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0929 11:52:55.886665  608598 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0929 11:52:55.912387  608598 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0929 11:52:55.912421  608598 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0929 11:52:55.946067  608598 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0929 11:52:55.946133  608598 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0929 11:52:55.980211  608598 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0929 11:52:55.980240  608598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0929 11:52:56.013852  608598 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0929 11:52:56.013885  608598 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0929 11:52:56.041402  608598 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0929 11:52:56.041436  608598 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0929 11:52:56.080186  608598 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0929 11:52:56.080222  608598 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0929 11:52:56.110073  608598 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0929 11:52:56.110117  608598 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0929 11:52:56.138535  608598 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0929 11:52:56.138566  608598 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0929 11:52:56.192824  608598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0929 11:52:57.372140  608598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.179260076s)
I0929 11:52:57.372207  608598 main.go:141] libmachine: Making call to close driver server
I0929 11:52:57.372227  608598 main.go:141] libmachine: (functional-345567) Calling .Close
I0929 11:52:57.372579  608598 main.go:141] libmachine: (functional-345567) DBG | Closing plugin on server side
I0929 11:52:57.372584  608598 main.go:141] libmachine: Successfully made call to close driver server
I0929 11:52:57.372657  608598 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 11:52:57.372678  608598 main.go:141] libmachine: Making call to close driver server
I0929 11:52:57.372689  608598 main.go:141] libmachine: (functional-345567) Calling .Close
I0929 11:52:57.372961  608598 main.go:141] libmachine: Successfully made call to close driver server
I0929 11:52:57.372978  608598 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 11:52:57.375351  608598 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-345567 addons enable metrics-server

I0929 11:52:57.377245  608598 addons.go:201] Writing out "functional-345567" config to set dashboard=true...
W0929 11:52:57.377606  608598 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0929 11:52:57.378340  608598 kapi.go:59] client config for functional-345567: &rest.Config{Host:"https://192.168.39.165:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt", KeyFile:"/home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.key", CAFile:"/home/jenkins/minikube-integration/21654-591397/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
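The rest.Config dump above lists the API server address and the client certificate, key, and CA file that the test's kapi client uses against https://192.168.39.165:8441. As a minimal, hypothetical sketch only (this is not minikube's own code), an equivalent client could be built with k8s.io/client-go from those same paths; the host and file paths below are copied from the log, while the Service lookup at the end merely mirrors the "Found service" line that follows in the trace.

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Host and TLS file paths are taken verbatim from the rest.Config logged above.
	cfg := &rest.Config{
		Host: "https://192.168.39.165:8441",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.key",
			CAFile:   "/home/jenkins/minikube-integration/21654-591397/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Fetch the Service that the trace below reports as "Found service".
	svc, err := clientset.CoreV1().Services("kubernetes-dashboard").
		Get(context.Background(), "kubernetes-dashboard", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("kubernetes-dashboard ClusterIP: %s", svc.Spec.ClusterIP)
}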
I0929 11:52:57.378869  608598 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0929 11:52:57.378894  608598 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0929 11:52:57.378899  608598 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0929 11:52:57.378903  608598 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0929 11:52:57.378909  608598 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0929 11:52:57.391857  608598 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  c2a20da6-3717-4fea-896c-6f1df9a0069a 881 0 2025-09-29 11:52:57 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-29 11:52:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.105.24.8,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.105.24.8],IPFamilies:[IPv4],AllocateLoadBalancerNod
ePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0929 11:52:57.392057  608598 out.go:285] * Launching proxy ...
* Launching proxy ...
I0929 11:52:57.392162  608598 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-345567 proxy --port 36195]
I0929 11:52:57.392478  608598 dashboard.go:157] Waiting for kubectl to output host:port ...
I0929 11:52:57.440055  608598 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0929 11:52:57.440150  608598 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0929 11:52:57.451424  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c94c783d-a36d-4a2d-ae7d-77557f161595] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:52:57 GMT]] Body:0xc0008100c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00171e000 TLS:<nil>}
I0929 11:52:57.451542  608598 retry.go:31] will retry after 50.483µs: Temporary Error: unexpected response code: 503
I0929 11:52:57.456349  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6c3c0885-2baf-42a6-aa02-59f3e9b0d2c3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:52:57 GMT]] Body:0xc00075d480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00171e140 TLS:<nil>}
I0929 11:52:57.456410  608598 retry.go:31] will retry after 114.527µs: Temporary Error: unexpected response code: 503
I0929 11:52:57.460324  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c179a467-494e-4afb-99a9-c8fb99880178] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:52:57 GMT]] Body:0xc000b8ae80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00053a8c0 TLS:<nil>}
I0929 11:52:57.460397  608598 retry.go:31] will retry after 291.16µs: Temporary Error: unexpected response code: 503
I0929 11:52:57.464250  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f766f1ae-67ca-43dc-b986-9f3a55cfe721] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:52:57 GMT]] Body:0xc00075d580 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002928c0 TLS:<nil>}
I0929 11:52:57.464306  608598 retry.go:31] will retry after 351.628µs: Temporary Error: unexpected response code: 503
I0929 11:52:57.468221  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ec94fec9-07a7-45cc-ae51-dbd375f22ec3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:52:57 GMT]] Body:0xc0008101c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00053aa00 TLS:<nil>}
I0929 11:52:57.468273  608598 retry.go:31] will retry after 417.551µs: Temporary Error: unexpected response code: 503
I0929 11:52:57.472010  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c4fe9be9-1d05-49ec-97e6-6c9dba9a58e3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:52:57 GMT]] Body:0xc00075d6c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00171e280 TLS:<nil>}
I0929 11:52:57.472069  608598 retry.go:31] will retry after 810.378µs: Temporary Error: unexpected response code: 503
I0929 11:52:57.476009  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8e5fc820-6870-4f89-b513-88ca4fb3b4c4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:52:57 GMT]] Body:0xc00075d780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00053ab40 TLS:<nil>}
I0929 11:52:57.476065  608598 retry.go:31] will retry after 1.317775ms: Temporary Error: unexpected response code: 503
I0929 11:52:57.480960  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7a000e44-d10f-4922-9602-0d039ce23234] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:52:57 GMT]] Body:0xc0008102c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00053ac80 TLS:<nil>}
I0929 11:52:57.481013  608598 retry.go:31] will retry after 2.119518ms: Temporary Error: unexpected response code: 503
I0929 11:52:57.488700  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[79bcf09f-66ef-4551-a1c1-80f08a5f9fee] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:52:57 GMT]] Body:0xc000b8b000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00171e3c0 TLS:<nil>}
I0929 11:52:57.488762  608598 retry.go:31] will retry after 1.444949ms: Temporary Error: unexpected response code: 503
I0929 11:52:57.493802  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6bfd84a3-b2c0-44da-8428-f95a33ca2c47] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:52:57 GMT]] Body:0xc00075d840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292a00 TLS:<nil>}
I0929 11:52:57.493896  608598 retry.go:31] will retry after 5.084048ms: Temporary Error: unexpected response code: 503
I0929 11:52:57.502472  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0eea4915-3f09-4fb7-9c29-3df11f1af964] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:52:57 GMT]] Body:0xc000b8b100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00053af00 TLS:<nil>}
I0929 11:52:57.502543  608598 retry.go:31] will retry after 5.259323ms: Temporary Error: unexpected response code: 503
I0929 11:52:57.511730  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ca26e66d-7fb8-421f-8258-4e1955a8dd5d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:52:57 GMT]] Body:0xc00075d940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292dc0 TLS:<nil>}
I0929 11:52:57.511791  608598 retry.go:31] will retry after 7.828633ms: Temporary Error: unexpected response code: 503
I0929 11:52:57.524241  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9b75f800-224e-4d3a-a70d-2599b07592c6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:52:57 GMT]] Body:0xc0008103c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00053b040 TLS:<nil>}
I0929 11:52:57.524311  608598 retry.go:31] will retry after 18.782965ms: Temporary Error: unexpected response code: 503
I0929 11:52:57.555204  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[46830e5f-4f47-4026-91c6-c2aeef7e4729] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:52:57 GMT]] Body:0xc00075da40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00171e500 TLS:<nil>}
I0929 11:52:57.555296  608598 retry.go:31] will retry after 13.707977ms: Temporary Error: unexpected response code: 503
I0929 11:52:57.581183  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[85ceca54-1051-4e85-a287-68b3db452c1f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:52:57 GMT]] Body:0xc0008104c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00053b180 TLS:<nil>}
I0929 11:52:57.581255  608598 retry.go:31] will retry after 22.456566ms: Temporary Error: unexpected response code: 503
I0929 11:52:57.615148  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c6362e65-63c2-4737-addc-54d9062cd205] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:52:57 GMT]] Body:0xc00075db40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00171e640 TLS:<nil>}
I0929 11:52:57.615249  608598 retry.go:31] will retry after 65.308154ms: Temporary Error: unexpected response code: 503
I0929 11:52:57.686802  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f4ceaf4f-8c93-4b71-9f9b-053e4567eec6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:52:57 GMT]] Body:0xc000b8b200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00171e780 TLS:<nil>}
I0929 11:52:57.686901  608598 retry.go:31] will retry after 98.280872ms: Temporary Error: unexpected response code: 503
I0929 11:52:57.791042  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b9cb179c-9043-4a29-ac68-e859c2eb1ae2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:52:57 GMT]] Body:0xc00075dbc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292f00 TLS:<nil>}
I0929 11:52:57.791141  608598 retry.go:31] will retry after 125.089996ms: Temporary Error: unexpected response code: 503
I0929 11:52:57.922816  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[912e98ae-669c-4a08-8a8c-452efbde86af] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:52:57 GMT]] Body:0xc000b8b300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00053b2c0 TLS:<nil>}
I0929 11:52:57.922882  608598 retry.go:31] will retry after 76.08025ms: Temporary Error: unexpected response code: 503
I0929 11:52:58.005436  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[50816b55-cc76-4865-8a84-1264733b0b18] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:52:57 GMT]] Body:0xc000810640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000293040 TLS:<nil>}
I0929 11:52:58.005515  608598 retry.go:31] will retry after 241.68739ms: Temporary Error: unexpected response code: 503
I0929 11:52:58.257361  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[56144811-05f6-4cc9-a50e-9a87b113b3cc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:52:58 GMT]] Body:0xc000810740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00171e8c0 TLS:<nil>}
I0929 11:52:58.257439  608598 retry.go:31] will retry after 323.955992ms: Temporary Error: unexpected response code: 503
I0929 11:52:58.585814  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7fae318f-f626-494b-b856-f06b2825caf4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:52:58 GMT]] Body:0xc000810800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00171ea00 TLS:<nil>}
I0929 11:52:58.585894  608598 retry.go:31] will retry after 253.151378ms: Temporary Error: unexpected response code: 503
I0929 11:52:58.842802  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[07e7cbee-85d6-487a-8d3e-b5f0728884a6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:52:58 GMT]] Body:0xc000b8b400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00171eb40 TLS:<nil>}
I0929 11:52:58.842859  608598 retry.go:31] will retry after 828.798452ms: Temporary Error: unexpected response code: 503
I0929 11:52:59.675454  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a34567b0-c461-489a-b141-656c70102c20] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:52:59 GMT]] Body:0xc00075dd40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000293180 TLS:<nil>}
I0929 11:52:59.675533  608598 retry.go:31] will retry after 647.240458ms: Temporary Error: unexpected response code: 503
I0929 11:53:00.328970  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[866f3f76-e42f-42bd-8597-ce5835dbeefb] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:53:00 GMT]] Body:0xc000810900 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00053b400 TLS:<nil>}
I0929 11:53:00.329036  608598 retry.go:31] will retry after 1.629721813s: Temporary Error: unexpected response code: 503
I0929 11:53:01.964617  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1aab99b3-ae87-4f31-aaf8-2f12257a6897] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:53:01 GMT]] Body:0xc00075de40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002932c0 TLS:<nil>}
I0929 11:53:01.964701  608598 retry.go:31] will retry after 2.886364725s: Temporary Error: unexpected response code: 503
I0929 11:53:04.856673  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d482b424-c1f6-40c2-a7d7-0fa18b097f5e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:53:04 GMT]] Body:0xc000810980 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00053b540 TLS:<nil>}
I0929 11:53:04.856740  608598 retry.go:31] will retry after 2.07082832s: Temporary Error: unexpected response code: 503
I0929 11:53:06.933334  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f8d5545a-a429-4978-ba40-47dbe3ac63e4] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:53:06 GMT]] Body:0xc000878040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000293e00 TLS:<nil>}
I0929 11:53:06.933418  608598 retry.go:31] will retry after 5.301200154s: Temporary Error: unexpected response code: 503
I0929 11:53:12.239884  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[48d324ad-6343-4ad2-99c9-c1a61a9d76ed] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:53:12 GMT]] Body:0xc000810a00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00053b900 TLS:<nil>}
I0929 11:53:12.240008  608598 retry.go:31] will retry after 10.030283214s: Temporary Error: unexpected response code: 503
I0929 11:53:22.275393  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[db002514-a3e3-4828-a8fb-c679191ef87f] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:53:22 GMT]] Body:0xc0015fe000 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00171ec80 TLS:<nil>}
I0929 11:53:22.275459  608598 retry.go:31] will retry after 13.673947758s: Temporary Error: unexpected response code: 503
I0929 11:53:35.953857  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[90f94f34-0397-4071-aac1-b169fb8c0344] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:53:35 GMT]] Body:0xc0015fe080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00162e000 TLS:<nil>}
I0929 11:53:35.953964  608598 retry.go:31] will retry after 21.826847817s: Temporary Error: unexpected response code: 503
I0929 11:53:57.785271  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6fc07086-10ac-4274-9f33-1e43fc577c67] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:53:57 GMT]] Body:0xc00176e000 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00053ba40 TLS:<nil>}
I0929 11:53:57.785350  608598 retry.go:31] will retry after 33.892361732s: Temporary Error: unexpected response code: 503
I0929 11:54:31.682759  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[371c524a-b83e-4b44-868f-069f121b888e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:54:31 GMT]] Body:0xc000878480 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00053bb80 TLS:<nil>}
I0929 11:54:31.682843  608598 retry.go:31] will retry after 26.033624351s: Temporary Error: unexpected response code: 503
I0929 11:54:57.725582  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0319453a-a2ee-4f43-8c04-10b08de19828] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:54:57 GMT]] Body:0xc0008d81c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004183c0 TLS:<nil>}
I0929 11:54:57.725662  608598 retry.go:31] will retry after 1m22.496336099s: Temporary Error: unexpected response code: 503
I0929 11:56:20.228825  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c0446787-41bd-4e42-b29e-485881082d96] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:56:20 GMT]] Body:0xc0008d8340 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000418500 TLS:<nil>}
I0929 11:56:20.228912  608598 retry.go:31] will retry after 37.20941324s: Temporary Error: unexpected response code: 503
I0929 11:56:57.442044  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f625cb0a-36bf-430a-9f2a-793490905dd5] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:56:57 GMT]] Body:0xc0015fe040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000418640 TLS:<nil>}
I0929 11:56:57.442144  608598 retry.go:31] will retry after 53.690640503s: Temporary Error: unexpected response code: 503
I0929 11:57:51.137176  608598 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[387a9d3f-56bd-4246-b28c-bea188ffe711] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:57:51 GMT]] Body:0xc0008d8200 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000418780 TLS:<nil>}
I0929 11:57:51.137282  608598 retry.go:31] will retry after 45.092903493s: Temporary Error: unexpected response code: 503
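The probe trace above shows the dashboard health check hitting the kubectl-proxy URL with steadily growing retry delays and getting 503 on every attempt across roughly five minutes of timestamps. As a rough illustration only (not minikube's retry.go, whose jittered backoff differs in detail), a capped exponential-backoff poll of that URL could look like the sketch below; the URL is copied from the trace, and the timing constants are invented for the example.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForDashboard polls url until it returns HTTP 200 or the timeout elapses,
// doubling the delay between attempts up to a cap.
func waitForDashboard(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 100 * time.Millisecond
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the dashboard answered through the proxy
			}
		}
		time.Sleep(delay)
		if delay < 30*time.Second {
			delay *= 2 // exponential backoff, loosely matching the growing intervals above
		}
	}
	return fmt.Errorf("dashboard not healthy within %s", timeout)
}

func main() {
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	if err := waitForDashboard(url, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}

In this run the probe never sees a 200: the Docker log in the post-mortem below shows that the kubernetesui/dashboard and kubernetesui/metrics-scraper images cannot be pulled ("toomanyrequests" unauthenticated pull rate limit), so the dashboard pod never becomes ready and the proxy keeps answering 503 until the test gives up.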
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-345567 -n functional-345567
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-345567 logs -n 25: (1.186809693s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-345567 ssh findmnt -T /mount2                                                                                                                │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:52 UTC │ 29 Sep 25 11:52 UTC │
	│ ssh            │ functional-345567 ssh findmnt -T /mount3                                                                                                                │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:52 UTC │ 29 Sep 25 11:52 UTC │
	│ mount          │ -p functional-345567 --kill=true                                                                                                                        │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:52 UTC │                     │
	│ image          │ functional-345567 image load --daemon kicbase/echo-server:functional-345567 --alsologtostderr                                                           │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:52 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls                                                                                                                              │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image load --daemon kicbase/echo-server:functional-345567 --alsologtostderr                                                           │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls                                                                                                                              │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image load --daemon kicbase/echo-server:functional-345567 --alsologtostderr                                                           │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls                                                                                                                              │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image save kicbase/echo-server:functional-345567 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image rm kicbase/echo-server:functional-345567 --alsologtostderr                                                                      │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls                                                                                                                              │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr                                       │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls                                                                                                                              │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image save --daemon kicbase/echo-server:functional-345567 --alsologtostderr                                                           │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ update-context │ functional-345567 update-context --alsologtostderr -v=2                                                                                                 │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ update-context │ functional-345567 update-context --alsologtostderr -v=2                                                                                                 │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ update-context │ functional-345567 update-context --alsologtostderr -v=2                                                                                                 │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls --format short --alsologtostderr                                                                                             │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls --format yaml --alsologtostderr                                                                                              │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ ssh            │ functional-345567 ssh pgrep buildkitd                                                                                                                   │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │                     │
	│ image          │ functional-345567 image build -t localhost/my-image:functional-345567 testdata/build --alsologtostderr                                                  │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls                                                                                                                              │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls --format json --alsologtostderr                                                                                              │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls --format table --alsologtostderr                                                                                             │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:52:54
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:52:54.886007  608412 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:52:54.886275  608412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:52:54.886285  608412 out.go:374] Setting ErrFile to fd 2...
	I0929 11:52:54.886290  608412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:52:54.886575  608412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21654-591397/.minikube/bin
	I0929 11:52:54.887080  608412 out.go:368] Setting JSON to false
	I0929 11:52:54.888152  608412 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5723,"bootTime":1759141052,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:52:54.888257  608412 start.go:140] virtualization: kvm guest
	I0929 11:52:54.890356  608412 out.go:179] * [functional-345567] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:52:54.891776  608412 notify.go:220] Checking for updates...
	I0929 11:52:54.891846  608412 out.go:179]   - MINIKUBE_LOCATION=21654
	I0929 11:52:54.893445  608412 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:52:54.894736  608412 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21654-591397/kubeconfig
	I0929 11:52:54.896027  608412 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21654-591397/.minikube
	I0929 11:52:54.897194  608412 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:52:54.898527  608412 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:52:54.901462  608412 config.go:182] Loaded profile config "functional-345567": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:52:54.902092  608412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:52:54.902190  608412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:52:54.918838  608412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34729
	I0929 11:52:54.919337  608412 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:52:54.919911  608412 main.go:141] libmachine: Using API Version  1
	I0929 11:52:54.919942  608412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:52:54.920387  608412 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:52:54.920611  608412 main.go:141] libmachine: (functional-345567) Calling .DriverName
	I0929 11:52:54.920900  608412 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:52:54.921299  608412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:52:54.921348  608412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:52:54.936850  608412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0929 11:52:54.937516  608412 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:52:54.938257  608412 main.go:141] libmachine: Using API Version  1
	I0929 11:52:54.938293  608412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:52:54.938784  608412 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:52:54.939026  608412 main.go:141] libmachine: (functional-345567) Calling .DriverName
	I0929 11:52:54.980510  608412 out.go:179] * Using the kvm2 driver based on the existing profile
	I0929 11:52:54.981656  608412 start.go:304] selected driver: kvm2
	I0929 11:52:54.981676  608412 start.go:924] validating driver "kvm2" against &{Name:functional-345567 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 Clu
sterName:functional-345567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:52:54.981806  608412 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:52:54.984075  608412 out.go:203] 
	W0929 11:52:54.986131  608412 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB
	I0929 11:52:54.987384  608412 out.go:203] 
	
	
	==> Docker <==
	Sep 29 11:53:09 functional-345567 dockerd[8452]: time="2025-09-29T11:53:09.841474104Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:53:11 functional-345567 dockerd[8452]: time="2025-09-29T11:53:11.738074728Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:53:11 functional-345567 dockerd[8452]: time="2025-09-29T11:53:11.773900002Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:53:12 functional-345567 dockerd[8452]: time="2025-09-29T11:53:12.852767491Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:53:14 functional-345567 dockerd[8452]: time="2025-09-29T11:53:14.731135083Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:53:14 functional-345567 dockerd[8452]: time="2025-09-29T11:53:14.767657466Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:53:37 functional-345567 dockerd[8452]: time="2025-09-29T11:53:37.753645150Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:53:37 functional-345567 dockerd[8452]: time="2025-09-29T11:53:37.803489297Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:53:37 functional-345567 dockerd[8452]: time="2025-09-29T11:53:37.828468447Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:53:37 functional-345567 dockerd[8452]: time="2025-09-29T11:53:37.866527290Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:53:38 functional-345567 dockerd[8452]: time="2025-09-29T11:53:38.831836323Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:53:38 functional-345567 dockerd[8452]: time="2025-09-29T11:53:38.940708907Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:54:18 functional-345567 dockerd[8452]: time="2025-09-29T11:54:18.737064767Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:54:18 functional-345567 dockerd[8452]: time="2025-09-29T11:54:18.847775644Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:54:18 functional-345567 cri-dockerd[9434]: time="2025-09-29T11:54:18Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
	Sep 29 11:54:29 functional-345567 dockerd[8452]: time="2025-09-29T11:54:29.742977658Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:54:29 functional-345567 dockerd[8452]: time="2025-09-29T11:54:29.789718835Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:54:30 functional-345567 dockerd[8452]: time="2025-09-29T11:54:30.812677558Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:54:31 functional-345567 dockerd[8452]: time="2025-09-29T11:54:31.810597682Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:55:50 functional-345567 dockerd[8452]: time="2025-09-29T11:55:50.739945495Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:55:50 functional-345567 dockerd[8452]: time="2025-09-29T11:55:50.784699281Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:55:55 functional-345567 dockerd[8452]: time="2025-09-29T11:55:55.734875184Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:55:55 functional-345567 dockerd[8452]: time="2025-09-29T11:55:55.777484806Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:56:01 functional-345567 dockerd[8452]: time="2025-09-29T11:56:01.838836743Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:56:04 functional-345567 dockerd[8452]: time="2025-09-29T11:56:04.829637369Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ce94dd62b125d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   8d7f0bfdf9cfb       busybox-mount
	4be86a79d09d0       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   947dcb252dd05       hello-node-connect-7d85dfc575-lrm8c
	71dea8f862b81       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   72daa0283c9ef       hello-node-75c85bcc94-xr87t
	50a0da838737e       52546a367cc9e                                                                                         5 minutes ago       Running             coredns                   5                   4049429ce4236       coredns-66bc5c9577-xk7nm
	411825215d27e       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       4                   563027a986f3d       storage-provisioner
	b32b19fdb12c8       df0860106674d                                                                                         5 minutes ago       Running             kube-proxy                4                   22518e2355969       kube-proxy-2fqpd
	90204288ee92a       90550c43ad2bc                                                                                         5 minutes ago       Running             kube-apiserver            0                   23505abfd486b       kube-apiserver-functional-345567
	ea4a881719e9b       a0af72f2ec6d6                                                                                         5 minutes ago       Running             kube-controller-manager   4                   6f92ee8d98831       kube-controller-manager-functional-345567
	2722204aec368       52546a367cc9e                                                                                         5 minutes ago       Running             coredns                   3                   c741d758f049d       coredns-66bc5c9577-mjdq6
	3e5e6adba4ebb       5f1f5298c888d                                                                                         5 minutes ago       Running             etcd                      3                   afdcfcc8dc192       etcd-functional-345567
	bd144e0b1825e       46169d968e920                                                                                         5 minutes ago       Running             kube-scheduler            4                   e9341f8a976df       kube-scheduler-functional-345567
	d71ede638e4d2       52546a367cc9e                                                                                         5 minutes ago       Exited              coredns                   4                   30b0133e782e5       coredns-66bc5c9577-xk7nm
	74103287cd23b       46169d968e920                                                                                         5 minutes ago       Exited              kube-scheduler            3                   4ec944c1a51c9       kube-scheduler-functional-345567
	7d3198e132f24       a0af72f2ec6d6                                                                                         5 minutes ago       Exited              kube-controller-manager   3                   86172d5e4ed87       kube-controller-manager-functional-345567
	4441c251624ac       6e38f40d628db                                                                                         5 minutes ago       Exited              storage-provisioner       3                   8deae7d11ce0d       storage-provisioner
	d17e345f5764f       df0860106674d                                                                                         5 minutes ago       Exited              kube-proxy                3                   9c5baa8d8ef07       kube-proxy-2fqpd
	ed7eee2023740       52546a367cc9e                                                                                         6 minutes ago       Exited              coredns                   2                   a5aae7bb8491c       coredns-66bc5c9577-mjdq6
	976b2c11ea333       5f1f5298c888d                                                                                         6 minutes ago       Exited              etcd                      2                   ceb47995d56fe       etcd-functional-345567
	
	
	==> coredns [2722204aec36] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58746 - 55120 "HINFO IN 7801123286633978322.7662679228127234237. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036452724s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [50a0da838737] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38522 - 50589 "HINFO IN 7299539350853405645.7723234432700575792. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020792926s
	
	
	==> coredns [d71ede638e4d] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:53869 - 12162 "HINFO IN 3342720793649580752.1000981730392323068. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.417393383s
	
	
	==> coredns [ed7eee202374] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36287 - 17292 "HINFO IN 3402359101510948574.979122807022581316. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.023865412s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-345567
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-345567
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8c533eb3dce8338d3d4e7231ea97d1c44ed6f81
	                    minikube.k8s.io/name=functional-345567
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T11_50_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 11:50:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-345567
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:57:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:53:23 +0000   Mon, 29 Sep 2025 11:49:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:53:23 +0000   Mon, 29 Sep 2025 11:49:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:53:23 +0000   Mon, 29 Sep 2025 11:49:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:53:23 +0000   Mon, 29 Sep 2025 11:50:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.165
	  Hostname:    functional-345567
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 8559d0fdeb664bce82856171ffe07f7f
	  System UUID:                8559d0fd-eb66-4bce-8285-6171ffe07f7f
	  Boot ID:                    fc84bad7-00d6-47c0-8939-3febc52a0433
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-xr87t                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  default                     hello-node-connect-7d85dfc575-lrm8c           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  default                     mysql-5bb876957f-drk25                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    5m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 coredns-66bc5c9577-mjdq6                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m46s
	  kube-system                 coredns-66bc5c9577-xk7nm                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m46s
	  kube-system                 etcd-functional-345567                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m51s
	  kube-system                 kube-apiserver-functional-345567              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-controller-manager-functional-345567     200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 kube-proxy-2fqpd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m46s
	  kube-system                 kube-scheduler-functional-345567              100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m45s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-ltcz6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-jjzsx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (72%)  700m (35%)
	  memory             752Mi (19%)  1040Mi (26%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m44s                  kube-proxy       
	  Normal  Starting                 5m31s                  kube-proxy       
	  Normal  Starting                 6m37s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  7m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    7m58s (x8 over 7m58s)  kubelet          Node functional-345567 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m58s (x7 over 7m58s)  kubelet          Node functional-345567 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m58s (x8 over 7m58s)  kubelet          Node functional-345567 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m51s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m51s                  kubelet          Node functional-345567 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m51s                  kubelet          Node functional-345567 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m51s                  kubelet          Node functional-345567 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m47s                  node-controller  Node functional-345567 event: Registered Node functional-345567 in Controller
	  Normal  NodeReady                7m45s                  kubelet          Node functional-345567 status is now: NodeReady
	  Normal  Starting                 6m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m43s (x8 over 6m43s)  kubelet          Node functional-345567 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m43s (x8 over 6m43s)  kubelet          Node functional-345567 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m43s (x7 over 6m43s)  kubelet          Node functional-345567 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m36s                  node-controller  Node functional-345567 event: Registered Node functional-345567 in Controller
	  Normal  Starting                 5m37s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m37s (x8 over 5m37s)  kubelet          Node functional-345567 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m37s (x8 over 5m37s)  kubelet          Node functional-345567 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m37s (x7 over 5m37s)  kubelet          Node functional-345567 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m30s                  node-controller  Node functional-345567 event: Registered Node functional-345567 in Controller
	
	
	==> dmesg <==
	[  +0.108237] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.116386] kauditd_printk_skb: 373 callbacks suppressed
	[  +0.102475] kauditd_printk_skb: 205 callbacks suppressed
	[Sep29 11:50] kauditd_printk_skb: 165 callbacks suppressed
	[  +1.045902] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.979507] kauditd_printk_skb: 270 callbacks suppressed
	[  +0.189723] kauditd_printk_skb: 2 callbacks suppressed
	[ +20.191182] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.446606] kauditd_printk_skb: 22 callbacks suppressed
	[Sep29 11:51] kauditd_printk_skb: 515 callbacks suppressed
	[  +0.000045] kauditd_printk_skb: 106 callbacks suppressed
	[  +4.857046] kauditd_printk_skb: 111 callbacks suppressed
	[  +7.559965] kauditd_printk_skb: 98 callbacks suppressed
	[ +15.196663] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.479035] kauditd_printk_skb: 22 callbacks suppressed
	[Sep29 11:52] kauditd_printk_skb: 470 callbacks suppressed
	[  +0.000025] kauditd_printk_skb: 178 callbacks suppressed
	[  +4.255392] kauditd_printk_skb: 66 callbacks suppressed
	[  +6.794943] kauditd_printk_skb: 84 callbacks suppressed
	[  +4.437328] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.623177] kauditd_printk_skb: 91 callbacks suppressed
	[  +1.873208] kauditd_printk_skb: 146 callbacks suppressed
	[  +2.698217] kauditd_printk_skb: 79 callbacks suppressed
	[Sep29 11:53] crun[14706]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.000093] kauditd_printk_skb: 104 callbacks suppressed
	
	
	==> etcd [3e5e6adba4eb] <==
	{"level":"warn","ts":"2025-09-29T11:52:21.797724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.818110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.834131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.847879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55210","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:55210: read: connection reset by peer"}
	{"level":"warn","ts":"2025-09-29T11:52:21.870351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.881434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.896048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.918341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.930053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.941528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.952839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.972455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.985991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.009906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.016610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.027497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.039261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.062312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.072788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.100425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.135296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.147547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.169177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.180777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.247522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55566","server-name":"","error":"EOF"}
	
	
	==> etcd [976b2c11ea33] <==
	{"level":"warn","ts":"2025-09-29T11:51:16.188492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:51:16.199134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:51:16.226491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:51:16.235921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:51:16.247338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:51:16.254037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:51:16.305662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43568","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T11:51:53.117016Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T11:51:53.117110Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-345567","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.165:2380"],"advertise-client-urls":["https://192.168.39.165:2379"]}
	{"level":"error","ts":"2025-09-29T11:51:53.117211Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:51:53.119444Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:52:00.125423Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:52:00.125528Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ffc3b7517aaad9f6","current-leader-member-id":"ffc3b7517aaad9f6"}
	{"level":"info","ts":"2025-09-29T11:52:00.127672Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T11:52:00.127714Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T11:52:00.129571Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:52:00.129658Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:52:00.129668Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T11:52:00.129704Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.165:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:52:00.130178Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.165:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:52:00.130318Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.165:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:52:00.133561Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"error","ts":"2025-09-29T11:52:00.133635Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.165:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:52:00.133811Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2025-09-29T11:52:00.133909Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-345567","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.165:2380"],"advertise-client-urls":["https://192.168.39.165:2379"]}
	
	
	==> kernel <==
	 11:57:56 up 8 min,  0 users,  load average: 0.20, 0.58, 0.40
	Linux functional-345567 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [90204288ee92] <==
	E0929 11:52:23.062685       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0929 11:52:23.657021       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0929 11:52:23.851417       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0929 11:52:25.081692       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0929 11:52:25.175039       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 11:52:25.228450       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 11:52:25.242820       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 11:52:26.499423       1 controller.go:667] quota admission added evaluator for: endpoints
	I0929 11:52:26.551063       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 11:52:26.648056       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 11:52:40.519895       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.248.96"}
	I0929 11:52:45.183111       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.97.77"}
	I0929 11:52:46.101049       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.121.36"}
	I0929 11:52:56.142766       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.99.90.196"}
	I0929 11:52:56.718813       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 11:52:57.321973       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.24.8"}
	I0929 11:52:57.354836       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.5.50"}
	I0929 11:53:29.416192       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:53:51.223561       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:54:45.491383       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:55:09.464467       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:55:54.861137       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:56:17.689709       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:57:17.511932       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:57:27.615807       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [7d3198e132f2] <==
	I0929 11:52:08.174197       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-controller-manager [ea4a881719e9] <==
	I0929 11:52:26.249165       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-345567"
	I0929 11:52:26.249423       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 11:52:26.245940       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 11:52:26.250150       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 11:52:26.250635       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 11:52:26.252617       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 11:52:26.258588       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 11:52:26.260291       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 11:52:26.267491       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 11:52:26.271425       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 11:52:26.271617       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 11:52:26.323832       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:52:26.345201       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:52:26.345283       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 11:52:26.345291       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0929 11:52:56.979172       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:56.991484       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:57.014127       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:57.015058       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:57.035545       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:57.047483       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:57.072110       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:57.073964       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:57.094822       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:57.095756       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [b32b19fdb12c] <==
	I0929 11:52:24.634935       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:52:24.735066       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:52:24.735134       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.165"]
	E0929 11:52:24.735202       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:52:24.865013       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 11:52:24.865063       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 11:52:24.865086       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:52:24.938859       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:52:24.939791       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:52:24.939807       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:52:24.951143       1 config.go:200] "Starting service config controller"
	I0929 11:52:24.951508       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:52:24.951756       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:52:24.951874       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:52:24.952014       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:52:24.952140       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:52:24.955349       1 config.go:309] "Starting node config controller"
	I0929 11:52:24.955671       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:52:24.955939       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 11:52:25.051884       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 11:52:25.056127       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:52:25.056143       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [d17e345f5764] <==
	I0929 11:52:06.589363       1 server_linux.go:53] "Using iptables proxy"
	I0929 11:52:06.688192       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0929 11:52:06.689895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-345567&limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 11:52:08.135725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-345567&limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [74103287cd23] <==
	I0929 11:52:08.685017       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [bd144e0b1825] <==
	E0929 11:52:15.295690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.39.165:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 11:52:15.396950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.39.165:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 11:52:15.504843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.39.165:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 11:52:15.650564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.39.165:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 11:52:15.686797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.39.165:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 11:52:18.377371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.39.165:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 11:52:18.816578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.39.165:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 11:52:19.067097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.39.165:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 11:52:19.370328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.165:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 11:52:19.519200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.39.165:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 11:52:19.683466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.39.165:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 11:52:19.946451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.39.165:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 11:52:19.952321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.39.165:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 11:52:20.058766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.39.165:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 11:52:20.136061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.165:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 11:52:20.510461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.39.165:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 11:52:22.918741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 11:52:22.918756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 11:52:22.918912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 11:52:22.919195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 11:52:22.920003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 11:52:22.921510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 11:52:22.923086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 11:52:22.923604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I0929 11:52:30.132408       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 11:56:36 functional-345567 kubelet[11641]: E0929 11:56:36.707767   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bf7fbbe7-176a-4572-843e-5c7514e63c62"
	Sep 29 11:56:36 functional-345567 kubelet[11641]: E0929 11:56:36.710819   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-ltcz6" podUID="48ff1da7-2b29-4e1a-a131-015577223249"
	Sep 29 11:56:44 functional-345567 kubelet[11641]: E0929 11:56:44.709731   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-drk25" podUID="ca84faec-7fe2-411c-964f-571eda825801"
	Sep 29 11:56:45 functional-345567 kubelet[11641]: E0929 11:56:45.719853   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jjzsx" podUID="5644d278-59ff-422e-baa9-b14a5238ef8f"
	Sep 29 11:56:49 functional-345567 kubelet[11641]: E0929 11:56:49.708642   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bf7fbbe7-176a-4572-843e-5c7514e63c62"
	Sep 29 11:56:51 functional-345567 kubelet[11641]: E0929 11:56:51.711811   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-ltcz6" podUID="48ff1da7-2b29-4e1a-a131-015577223249"
	Sep 29 11:56:58 functional-345567 kubelet[11641]: E0929 11:56:58.710434   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-drk25" podUID="ca84faec-7fe2-411c-964f-571eda825801"
	Sep 29 11:56:59 functional-345567 kubelet[11641]: E0929 11:56:59.713140   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jjzsx" podUID="5644d278-59ff-422e-baa9-b14a5238ef8f"
	Sep 29 11:57:00 functional-345567 kubelet[11641]: E0929 11:57:00.707766   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bf7fbbe7-176a-4572-843e-5c7514e63c62"
	Sep 29 11:57:03 functional-345567 kubelet[11641]: E0929 11:57:03.714694   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-ltcz6" podUID="48ff1da7-2b29-4e1a-a131-015577223249"
	Sep 29 11:57:10 functional-345567 kubelet[11641]: E0929 11:57:10.710034   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-drk25" podUID="ca84faec-7fe2-411c-964f-571eda825801"
	Sep 29 11:57:11 functional-345567 kubelet[11641]: E0929 11:57:11.708032   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bf7fbbe7-176a-4572-843e-5c7514e63c62"
	Sep 29 11:57:12 functional-345567 kubelet[11641]: E0929 11:57:12.710716   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jjzsx" podUID="5644d278-59ff-422e-baa9-b14a5238ef8f"
	Sep 29 11:57:17 functional-345567 kubelet[11641]: E0929 11:57:17.714532   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-ltcz6" podUID="48ff1da7-2b29-4e1a-a131-015577223249"
	Sep 29 11:57:22 functional-345567 kubelet[11641]: E0929 11:57:22.707570   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bf7fbbe7-176a-4572-843e-5c7514e63c62"
	Sep 29 11:57:23 functional-345567 kubelet[11641]: E0929 11:57:23.713555   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-drk25" podUID="ca84faec-7fe2-411c-964f-571eda825801"
	Sep 29 11:57:25 functional-345567 kubelet[11641]: E0929 11:57:25.713001   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jjzsx" podUID="5644d278-59ff-422e-baa9-b14a5238ef8f"
	Sep 29 11:57:32 functional-345567 kubelet[11641]: E0929 11:57:32.711000   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-ltcz6" podUID="48ff1da7-2b29-4e1a-a131-015577223249"
	Sep 29 11:57:33 functional-345567 kubelet[11641]: E0929 11:57:33.707853   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bf7fbbe7-176a-4572-843e-5c7514e63c62"
	Sep 29 11:57:37 functional-345567 kubelet[11641]: E0929 11:57:37.712101   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jjzsx" podUID="5644d278-59ff-422e-baa9-b14a5238ef8f"
	Sep 29 11:57:38 functional-345567 kubelet[11641]: E0929 11:57:38.711456   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-drk25" podUID="ca84faec-7fe2-411c-964f-571eda825801"
	Sep 29 11:57:43 functional-345567 kubelet[11641]: E0929 11:57:43.712725   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-ltcz6" podUID="48ff1da7-2b29-4e1a-a131-015577223249"
	Sep 29 11:57:47 functional-345567 kubelet[11641]: E0929 11:57:47.707965   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bf7fbbe7-176a-4572-843e-5c7514e63c62"
	Sep 29 11:57:51 functional-345567 kubelet[11641]: E0929 11:57:51.718172   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jjzsx" podUID="5644d278-59ff-422e-baa9-b14a5238ef8f"
	Sep 29 11:57:52 functional-345567 kubelet[11641]: E0929 11:57:52.710676   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-drk25" podUID="ca84faec-7fe2-411c-964f-571eda825801"
	
	
	==> storage-provisioner [411825215d27] <==
	W0929 11:57:31.811107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:57:33.815709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:57:33.821352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:57:35.824596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:57:35.834790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:57:37.839982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:57:37.845866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:57:39.850168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:57:39.856110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:57:41.859682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:57:41.868472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:57:43.872171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:57:43.878656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:57:45.881927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:57:45.890293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:57:47.894606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:57:47.901316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:57:49.904699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:57:49.910412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:57:51.914478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:57:51.924025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:57:53.927795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:57:53.932981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:57:55.937881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:57:55.948285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [4441c251624a] <==
	I0929 11:52:06.520774       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 11:52:06.525886       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-345567 -n functional-345567
helpers_test.go:269: (dbg) Run:  kubectl --context functional-345567 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-drk25 sp-pod dashboard-metrics-scraper-77bf4d6c4c-ltcz6 kubernetes-dashboard-855c9754f9-jjzsx
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-345567 describe pod busybox-mount mysql-5bb876957f-drk25 sp-pod dashboard-metrics-scraper-77bf4d6c4c-ltcz6 kubernetes-dashboard-855c9754f9-jjzsx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-345567 describe pod busybox-mount mysql-5bb876957f-drk25 sp-pod dashboard-metrics-scraper-77bf4d6c4c-ltcz6 kubernetes-dashboard-855c9754f9-jjzsx: exit status 1 (94.296799ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-345567/192.168.39.165
	Start Time:       Mon, 29 Sep 2025 11:52:49 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.15
	IPs:
	  IP:  10.244.0.15
	Containers:
	  mount-munger:
	    Container ID:  docker://ce94dd62b125d678401e70ac6f390e8514578a04c5df6bb8e5cd2d1e8ec1c46f
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 11:52:51 +0000
	      Finished:     Mon, 29 Sep 2025 11:52:51 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jclnd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-jclnd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  5m8s  default-scheduler  Successfully assigned default/busybox-mount to functional-345567
	  Normal  Pulling    5m7s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m6s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.578s (1.578s including waiting). Image size: 4403845 bytes.
	  Normal  Created    5m6s  kubelet            Created container: mount-munger
	  Normal  Started    5m6s  kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-drk25
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-345567/192.168.39.165
	Start Time:       Mon, 29 Sep 2025 11:52:56 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.17
	IPs:
	  IP:           10.244.0.17
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qv62j (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qv62j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m1s                  default-scheduler  Successfully assigned default/mysql-5bb876957f-drk25 to functional-345567
	  Normal   Pulling    113s (x5 over 5m)     kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     113s (x5 over 5m)     kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     113s (x5 over 5m)     kubelet            Error: ErrImagePull
	  Warning  Failed     59s (x15 over 4m59s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    5s (x19 over 4m59s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-345567/192.168.39.165
	Start Time:       Mon, 29 Sep 2025 11:52:53 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.16
	IPs:
	  IP:  10.244.0.16
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t797q (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-t797q:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m4s                  default-scheduler  Successfully assigned default/sp-pod to functional-345567
	  Warning  Failed     5m3s                  kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    116s (x5 over 5m3s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     116s (x5 over 5m3s)   kubelet            Error: ErrImagePull
	  Warning  Failed     116s (x4 over 4m48s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     68s (x15 over 5m3s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    10s (x20 over 5m3s)   kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-ltcz6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-jjzsx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-345567 describe pod busybox-mount mysql-5bb876957f-drk25 sp-pod dashboard-metrics-scraper-77bf4d6c4c-ltcz6 kubernetes-dashboard-855c9754f9-jjzsx: exit status 1
E0929 11:58:18.895043  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/DashboardCmd (302.04s)
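Every failure recorded for this test traces back to the same root cause: docker.io's unauthenticated pull rate limit (toomanyrequests), hit while pulling kubernetesui/dashboard, kubernetesui/metrics-scraper, nginx and mysql. A hypothetical follow-up sketch, not part of this run (only the profile name and image tag are taken from the logs above; everything else is an assumption to verify):

	# reproduce the failure by hand from inside the node
	minikube -p functional-345567 ssh -- docker pull docker.io/kubernetesui/dashboard:v2.7.0

	# pre-pull on the host (where authenticated credentials or a registry mirror may apply)
	# and side-load the image so the kubelet does not pull anonymously from docker.io
	docker pull docker.io/kubernetesui/dashboard:v2.7.0
	minikube -p functional-345567 image load docker.io/kubernetesui/dashboard:v2.7.0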

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (370s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [c580c601-b180-499d-9c7b-1789d949bae7] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.008756046s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-345567 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-345567 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-345567 get pvc myclaim -o=json
I0929 11:52:52.145404  595293 retry.go:31] will retry after 1.224039209s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:a481a38b-04c7-495c-8e4e-301819998fca ResourceVersion:782 Generation:0 CreationTimestamp:2025-09-29 11:52:52 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001615c60 VolumeMode:0xc001615c70 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-345567 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-345567 apply -f testdata/storage-provisioner/pod.yaml
I0929 11:52:53.567492  595293 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [bf7fbbe7-176a-4572-843e-5c7514e63c62] Pending
helpers_test.go:352: "sp-pod" [bf7fbbe7-176a-4572-843e-5c7514e63c62] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-345567 -n functional-345567
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-09-29 11:58:53.841073818 +0000 UTC m=+1719.995807499
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-345567 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-345567 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-345567/192.168.39.165
Start Time:       Mon, 29 Sep 2025 11:52:53 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.16
IPs:
IP:  10.244.0.16
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t797q (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-t797q:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/sp-pod to functional-345567
Warning  Failed     5m59s                  kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    2m52s (x5 over 5m59s)  kubelet            Pulling image "docker.io/nginx"
Warning  Failed     2m52s (x5 over 5m59s)  kubelet            Error: ErrImagePull
Warning  Failed     2m52s (x4 over 5m44s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    52s (x21 over 5m59s)   kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     52s (x21 over 5m59s)   kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-345567 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-345567 logs sp-pod -n default: exit status 1 (86.586892ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-345567 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
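For reference, the claim and pod exercised above can be reconstructed from the describe output; this is a sketch of what testdata/storage-provisioner/pvc.yaml and pod.yaml plausibly contain (field values are taken from the logs above; the exact fixture contents are an assumption), shown only to make the failing setup concrete:

	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes: ["ReadWriteOnce"]
	  volumeMode: Filesystem
	  resources:
	    requests:
	      storage: 500Mi
	---
	apiVersion: v1
	kind: Pod
	metadata:
	  name: sp-pod
	  labels:
	    test: storage-provisioner
	spec:
	  containers:
	  - name: myfrontend
	    image: docker.io/nginx
	    volumeMounts:
	    - mountPath: /tmp/mount
	      name: mypd
	  volumes:
	  - name: mypd
	    persistentVolumeClaim:
	      claimName: myclaim

The PVC itself binds, but the pod never becomes Ready for the same reason as in the other failures: the docker.io/nginx pull is rejected with toomanyrequests, so myfrontend stays in ImagePullBackOff.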
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-345567 -n functional-345567
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-345567 logs -n 25: (1.132668606s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-345567 ssh findmnt -T /mount2                                                                                                                │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:52 UTC │ 29 Sep 25 11:52 UTC │
	│ ssh            │ functional-345567 ssh findmnt -T /mount3                                                                                                                │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:52 UTC │ 29 Sep 25 11:52 UTC │
	│ mount          │ -p functional-345567 --kill=true                                                                                                                        │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:52 UTC │                     │
	│ image          │ functional-345567 image load --daemon kicbase/echo-server:functional-345567 --alsologtostderr                                                           │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:52 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls                                                                                                                              │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image load --daemon kicbase/echo-server:functional-345567 --alsologtostderr                                                           │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls                                                                                                                              │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image load --daemon kicbase/echo-server:functional-345567 --alsologtostderr                                                           │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls                                                                                                                              │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image save kicbase/echo-server:functional-345567 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image rm kicbase/echo-server:functional-345567 --alsologtostderr                                                                      │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls                                                                                                                              │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr                                       │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls                                                                                                                              │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image save --daemon kicbase/echo-server:functional-345567 --alsologtostderr                                                           │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ update-context │ functional-345567 update-context --alsologtostderr -v=2                                                                                                 │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ update-context │ functional-345567 update-context --alsologtostderr -v=2                                                                                                 │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ update-context │ functional-345567 update-context --alsologtostderr -v=2                                                                                                 │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls --format short --alsologtostderr                                                                                             │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls --format yaml --alsologtostderr                                                                                              │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ ssh            │ functional-345567 ssh pgrep buildkitd                                                                                                                   │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │                     │
	│ image          │ functional-345567 image build -t localhost/my-image:functional-345567 testdata/build --alsologtostderr                                                  │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls                                                                                                                              │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls --format json --alsologtostderr                                                                                              │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls --format table --alsologtostderr                                                                                             │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:52:54
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:52:54.886007  608412 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:52:54.886275  608412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:52:54.886285  608412 out.go:374] Setting ErrFile to fd 2...
	I0929 11:52:54.886290  608412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:52:54.886575  608412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21654-591397/.minikube/bin
	I0929 11:52:54.887080  608412 out.go:368] Setting JSON to false
	I0929 11:52:54.888152  608412 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5723,"bootTime":1759141052,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:52:54.888257  608412 start.go:140] virtualization: kvm guest
	I0929 11:52:54.890356  608412 out.go:179] * [functional-345567] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:52:54.891776  608412 notify.go:220] Checking for updates...
	I0929 11:52:54.891846  608412 out.go:179]   - MINIKUBE_LOCATION=21654
	I0929 11:52:54.893445  608412 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:52:54.894736  608412 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21654-591397/kubeconfig
	I0929 11:52:54.896027  608412 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21654-591397/.minikube
	I0929 11:52:54.897194  608412 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:52:54.898527  608412 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:52:54.901462  608412 config.go:182] Loaded profile config "functional-345567": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:52:54.902092  608412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:52:54.902190  608412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:52:54.918838  608412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34729
	I0929 11:52:54.919337  608412 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:52:54.919911  608412 main.go:141] libmachine: Using API Version  1
	I0929 11:52:54.919942  608412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:52:54.920387  608412 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:52:54.920611  608412 main.go:141] libmachine: (functional-345567) Calling .DriverName
	I0929 11:52:54.920900  608412 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:52:54.921299  608412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:52:54.921348  608412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:52:54.936850  608412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0929 11:52:54.937516  608412 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:52:54.938257  608412 main.go:141] libmachine: Using API Version  1
	I0929 11:52:54.938293  608412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:52:54.938784  608412 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:52:54.939026  608412 main.go:141] libmachine: (functional-345567) Calling .DriverName
	I0929 11:52:54.980510  608412 out.go:179] * Using the kvm2 driver based on existing profile
	I0929 11:52:54.981656  608412 start.go:304] selected driver: kvm2
	I0929 11:52:54.981676  608412 start.go:924] validating driver "kvm2" against &{Name:functional-345567 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 Clu
sterName:functional-345567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:52:54.981806  608412 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:52:54.984075  608412 out.go:203] 
	W0929 11:52:54.986131  608412 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0929 11:52:54.987384  608412 out.go:203] 
	
	
	==> Docker <==
	Sep 29 11:53:37 functional-345567 dockerd[8452]: time="2025-09-29T11:53:37.828468447Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:53:37 functional-345567 dockerd[8452]: time="2025-09-29T11:53:37.866527290Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:53:38 functional-345567 dockerd[8452]: time="2025-09-29T11:53:38.831836323Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:53:38 functional-345567 dockerd[8452]: time="2025-09-29T11:53:38.940708907Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:54:18 functional-345567 dockerd[8452]: time="2025-09-29T11:54:18.737064767Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:54:18 functional-345567 dockerd[8452]: time="2025-09-29T11:54:18.847775644Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:54:18 functional-345567 cri-dockerd[9434]: time="2025-09-29T11:54:18Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
	Sep 29 11:54:29 functional-345567 dockerd[8452]: time="2025-09-29T11:54:29.742977658Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:54:29 functional-345567 dockerd[8452]: time="2025-09-29T11:54:29.789718835Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:54:30 functional-345567 dockerd[8452]: time="2025-09-29T11:54:30.812677558Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:54:31 functional-345567 dockerd[8452]: time="2025-09-29T11:54:31.810597682Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:55:50 functional-345567 dockerd[8452]: time="2025-09-29T11:55:50.739945495Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:55:50 functional-345567 dockerd[8452]: time="2025-09-29T11:55:50.784699281Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:55:55 functional-345567 dockerd[8452]: time="2025-09-29T11:55:55.734875184Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:55:55 functional-345567 dockerd[8452]: time="2025-09-29T11:55:55.777484806Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:56:01 functional-345567 dockerd[8452]: time="2025-09-29T11:56:01.838836743Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:56:04 functional-345567 dockerd[8452]: time="2025-09-29T11:56:04.829637369Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:58:31 functional-345567 dockerd[8452]: time="2025-09-29T11:58:31.739347822Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:58:31 functional-345567 dockerd[8452]: time="2025-09-29T11:58:31.841406041Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:58:31 functional-345567 cri-dockerd[9434]: time="2025-09-29T11:58:31Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
	Sep 29 11:58:38 functional-345567 dockerd[8452]: time="2025-09-29T11:58:38.732849896Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:58:38 functional-345567 dockerd[8452]: time="2025-09-29T11:58:38.777079011Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:58:48 functional-345567 dockerd[8452]: time="2025-09-29T11:58:48.816028647Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:58:52 functional-345567 dockerd[8452]: time="2025-09-29T11:58:52.884013029Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:58:52 functional-345567 cri-dockerd[9434]: time="2025-09-29T11:58:52Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
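
	The repeated "toomanyrequests" failures above are Docker Hub's anonymous pull rate limit being hit from this host; the dashboard, metrics-scraper and nginx pulls in this window all fail the same way. A minimal sketch for checking the remaining anonymous quota, assuming curl and jq are available on the host (the token endpoint and RateLimit headers are Docker Hub's documented probe, not something the test run itself uses):
	  # Fetch an anonymous pull token for the rate-limit probe repository,
	  # then read the ratelimit-limit / ratelimit-remaining headers from the registry.
	  TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	  curl -s --head -H "Authorization: Bearer $TOKEN" \
	    "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit
	Authenticating the node's Docker daemon (docker login) or starting the profile with a registry mirror (for example, minikube start --registry-mirror=<mirror URL>) would raise or avoid the anonymous limit; both are assumed workarounds, not something this report shows being configured.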
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ce94dd62b125d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   6 minutes ago       Exited              mount-munger              0                   8d7f0bfdf9cfb       busybox-mount
	4be86a79d09d0       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           6 minutes ago       Running             echo-server               0                   947dcb252dd05       hello-node-connect-7d85dfc575-lrm8c
	71dea8f862b81       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           6 minutes ago       Running             echo-server               0                   72daa0283c9ef       hello-node-75c85bcc94-xr87t
	50a0da838737e       52546a367cc9e                                                                                         6 minutes ago       Running             coredns                   5                   4049429ce4236       coredns-66bc5c9577-xk7nm
	411825215d27e       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       4                   563027a986f3d       storage-provisioner
	b32b19fdb12c8       df0860106674d                                                                                         6 minutes ago       Running             kube-proxy                4                   22518e2355969       kube-proxy-2fqpd
	90204288ee92a       90550c43ad2bc                                                                                         6 minutes ago       Running             kube-apiserver            0                   23505abfd486b       kube-apiserver-functional-345567
	ea4a881719e9b       a0af72f2ec6d6                                                                                         6 minutes ago       Running             kube-controller-manager   4                   6f92ee8d98831       kube-controller-manager-functional-345567
	2722204aec368       52546a367cc9e                                                                                         6 minutes ago       Running             coredns                   3                   c741d758f049d       coredns-66bc5c9577-mjdq6
	3e5e6adba4ebb       5f1f5298c888d                                                                                         6 minutes ago       Running             etcd                      3                   afdcfcc8dc192       etcd-functional-345567
	bd144e0b1825e       46169d968e920                                                                                         6 minutes ago       Running             kube-scheduler            4                   e9341f8a976df       kube-scheduler-functional-345567
	d71ede638e4d2       52546a367cc9e                                                                                         6 minutes ago       Exited              coredns                   4                   30b0133e782e5       coredns-66bc5c9577-xk7nm
	74103287cd23b       46169d968e920                                                                                         6 minutes ago       Exited              kube-scheduler            3                   4ec944c1a51c9       kube-scheduler-functional-345567
	7d3198e132f24       a0af72f2ec6d6                                                                                         6 minutes ago       Exited              kube-controller-manager   3                   86172d5e4ed87       kube-controller-manager-functional-345567
	4441c251624ac       6e38f40d628db                                                                                         6 minutes ago       Exited              storage-provisioner       3                   8deae7d11ce0d       storage-provisioner
	d17e345f5764f       df0860106674d                                                                                         6 minutes ago       Exited              kube-proxy                3                   9c5baa8d8ef07       kube-proxy-2fqpd
	ed7eee2023740       52546a367cc9e                                                                                         7 minutes ago       Exited              coredns                   2                   a5aae7bb8491c       coredns-66bc5c9577-mjdq6
	976b2c11ea333       5f1f5298c888d                                                                                         7 minutes ago       Exited              etcd                      2                   ceb47995d56fe       etcd-functional-345567
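
	The table above is the runtime's view of the node after the last restart: the current control-plane containers, coredns replicas and echo-server pods are Running, and most of the Exited rows are the earlier control-plane attempts that the later restarts replaced (busybox-mount simply ran to completion). A sketch of how the same listing could be reproduced by hand against this profile, assuming the minikube binary is on PATH and the Docker runtime inside the VM:
	  # List every container (running and exited) inside the functional-345567 VM.
	  minikube -p functional-345567 ssh "docker ps -a"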
	
	
	==> coredns [2722204aec36] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58746 - 55120 "HINFO IN 7801123286633978322.7662679228127234237. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036452724s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [50a0da838737] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38522 - 50589 "HINFO IN 7299539350853405645.7723234432700575792. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020792926s
	
	
	==> coredns [d71ede638e4d] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:53869 - 12162 "HINFO IN 3342720793649580752.1000981730392323068. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.417393383s
	
	
	==> coredns [ed7eee202374] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36287 - 17292 "HINFO IN 3402359101510948574.979122807022581316. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.023865412s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
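
	The "connection refused" lines from the CoreDNS replicas line up with the apiserver restarts: the newest replica ([50a0da838737]) starts cleanly, [2722204aec36] retries until the API is reachable again, and the older instances ([d71ede638e4d], [ed7eee202374]) were shut down with SIGTERM. A quick check that cluster DNS settled after the restarts, assuming the standard k8s-app=kube-dns label on the CoreDNS pods:
	  # Both coredns-66bc5c9577 replicas should show Running and 1/1 Ready.
	  kubectl --context functional-345567 -n kube-system get pods -l k8s-app=kube-dns -o wide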
	
	
	==> describe nodes <==
	Name:               functional-345567
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-345567
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8c533eb3dce8338d3d4e7231ea97d1c44ed6f81
	                    minikube.k8s.io/name=functional-345567
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T11_50_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 11:50:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-345567
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:58:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:58:40 +0000   Mon, 29 Sep 2025 11:49:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:58:40 +0000   Mon, 29 Sep 2025 11:49:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:58:40 +0000   Mon, 29 Sep 2025 11:49:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:58:40 +0000   Mon, 29 Sep 2025 11:50:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.165
	  Hostname:    functional-345567
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 8559d0fdeb664bce82856171ffe07f7f
	  System UUID:                8559d0fd-eb66-4bce-8285-6171ffe07f7f
	  Boot ID:                    fc84bad7-00d6-47c0-8939-3febc52a0433
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-xr87t                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  default                     hello-node-connect-7d85dfc575-lrm8c           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  default                     mysql-5bb876957f-drk25                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    5m58s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-66bc5c9577-mjdq6                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m44s
	  kube-system                 coredns-66bc5c9577-xk7nm                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m44s
	  kube-system                 etcd-functional-345567                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m49s
	  kube-system                 kube-apiserver-functional-345567              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-controller-manager-functional-345567     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m49s
	  kube-system                 kube-proxy-2fqpd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m44s
	  kube-system                 kube-scheduler-functional-345567              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m49s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m43s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-ltcz6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-jjzsx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (72%)  700m (35%)
	  memory             752Mi (19%)  1040Mi (26%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m43s                  kube-proxy       
	  Normal  Starting                 6m30s                  kube-proxy       
	  Normal  Starting                 7m36s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  8m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m56s (x8 over 8m56s)  kubelet          Node functional-345567 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m56s (x7 over 8m56s)  kubelet          Node functional-345567 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  8m56s (x8 over 8m56s)  kubelet          Node functional-345567 status is now: NodeHasSufficientMemory
	  Normal  Starting                 8m49s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m49s                  kubelet          Node functional-345567 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m49s                  kubelet          Node functional-345567 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m49s                  kubelet          Node functional-345567 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m45s                  node-controller  Node functional-345567 event: Registered Node functional-345567 in Controller
	  Normal  NodeReady                8m43s                  kubelet          Node functional-345567 status is now: NodeReady
	  Normal  Starting                 7m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m41s (x8 over 7m41s)  kubelet          Node functional-345567 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m41s (x8 over 7m41s)  kubelet          Node functional-345567 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m41s (x7 over 7m41s)  kubelet          Node functional-345567 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m34s                  node-controller  Node functional-345567 event: Registered Node functional-345567 in Controller
	  Normal  Starting                 6m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m35s (x8 over 6m35s)  kubelet          Node functional-345567 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s (x8 over 6m35s)  kubelet          Node functional-345567 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s (x7 over 6m35s)  kubelet          Node functional-345567 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m28s                  node-controller  Node functional-345567 event: Registered Node functional-345567 in Controller
	
	
	==> dmesg <==
	[  +0.108237] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.116386] kauditd_printk_skb: 373 callbacks suppressed
	[  +0.102475] kauditd_printk_skb: 205 callbacks suppressed
	[Sep29 11:50] kauditd_printk_skb: 165 callbacks suppressed
	[  +1.045902] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.979507] kauditd_printk_skb: 270 callbacks suppressed
	[  +0.189723] kauditd_printk_skb: 2 callbacks suppressed
	[ +20.191182] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.446606] kauditd_printk_skb: 22 callbacks suppressed
	[Sep29 11:51] kauditd_printk_skb: 515 callbacks suppressed
	[  +0.000045] kauditd_printk_skb: 106 callbacks suppressed
	[  +4.857046] kauditd_printk_skb: 111 callbacks suppressed
	[  +7.559965] kauditd_printk_skb: 98 callbacks suppressed
	[ +15.196663] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.479035] kauditd_printk_skb: 22 callbacks suppressed
	[Sep29 11:52] kauditd_printk_skb: 470 callbacks suppressed
	[  +0.000025] kauditd_printk_skb: 178 callbacks suppressed
	[  +4.255392] kauditd_printk_skb: 66 callbacks suppressed
	[  +6.794943] kauditd_printk_skb: 84 callbacks suppressed
	[  +4.437328] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.623177] kauditd_printk_skb: 91 callbacks suppressed
	[  +1.873208] kauditd_printk_skb: 146 callbacks suppressed
	[  +2.698217] kauditd_printk_skb: 79 callbacks suppressed
	[Sep29 11:53] crun[14706]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.000093] kauditd_printk_skb: 104 callbacks suppressed
	
	
	==> etcd [3e5e6adba4eb] <==
	{"level":"warn","ts":"2025-09-29T11:52:21.797724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.818110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.834131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.847879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55210","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:55210: read: connection reset by peer"}
	{"level":"warn","ts":"2025-09-29T11:52:21.870351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.881434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.896048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.918341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.930053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.941528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.952839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.972455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.985991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.009906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.016610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.027497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.039261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.062312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.072788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.100425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.135296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.147547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.169177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.180777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.247522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55566","server-name":"","error":"EOF"}
	
	
	==> etcd [976b2c11ea33] <==
	{"level":"warn","ts":"2025-09-29T11:51:16.188492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:51:16.199134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:51:16.226491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:51:16.235921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:51:16.247338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:51:16.254037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:51:16.305662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43568","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T11:51:53.117016Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T11:51:53.117110Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-345567","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.165:2380"],"advertise-client-urls":["https://192.168.39.165:2379"]}
	{"level":"error","ts":"2025-09-29T11:51:53.117211Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:51:53.119444Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:52:00.125423Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:52:00.125528Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ffc3b7517aaad9f6","current-leader-member-id":"ffc3b7517aaad9f6"}
	{"level":"info","ts":"2025-09-29T11:52:00.127672Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T11:52:00.127714Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T11:52:00.129571Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:52:00.129658Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:52:00.129668Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T11:52:00.129704Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.165:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:52:00.130178Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.165:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:52:00.130318Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.165:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:52:00.133561Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"error","ts":"2025-09-29T11:52:00.133635Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.165:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:52:00.133811Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2025-09-29T11:52:00.133909Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-345567","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.165:2380"],"advertise-client-urls":["https://192.168.39.165:2379"]}
	
	
	==> kernel <==
	 11:58:55 up 9 min,  0 users,  load average: 0.08, 0.48, 0.38
	Linux functional-345567 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [90204288ee92] <==
	I0929 11:52:23.851417       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0929 11:52:25.081692       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0929 11:52:25.175039       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 11:52:25.228450       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 11:52:25.242820       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 11:52:26.499423       1 controller.go:667] quota admission added evaluator for: endpoints
	I0929 11:52:26.551063       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 11:52:26.648056       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 11:52:40.519895       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.248.96"}
	I0929 11:52:45.183111       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.97.77"}
	I0929 11:52:46.101049       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.121.36"}
	I0929 11:52:56.142766       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.99.90.196"}
	I0929 11:52:56.718813       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 11:52:57.321973       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.24.8"}
	I0929 11:52:57.354836       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.5.50"}
	I0929 11:53:29.416192       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:53:51.223561       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:54:45.491383       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:55:09.464467       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:55:54.861137       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:56:17.689709       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:57:17.511932       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:57:27.615807       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:58:23.497988       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:58:49.198750       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [7d3198e132f2] <==
	I0929 11:52:08.174197       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-controller-manager [ea4a881719e9] <==
	I0929 11:52:26.249165       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-345567"
	I0929 11:52:26.249423       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 11:52:26.245940       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 11:52:26.250150       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 11:52:26.250635       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 11:52:26.252617       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 11:52:26.258588       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 11:52:26.260291       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 11:52:26.267491       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 11:52:26.271425       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 11:52:26.271617       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 11:52:26.323832       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:52:26.345201       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:52:26.345283       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 11:52:26.345291       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0929 11:52:56.979172       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:56.991484       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:57.014127       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:57.015058       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:57.035545       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:57.047483       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:57.072110       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:57.073964       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:57.094822       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:57.095756       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [b32b19fdb12c] <==
	I0929 11:52:24.634935       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:52:24.735066       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:52:24.735134       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.165"]
	E0929 11:52:24.735202       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:52:24.865013       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 11:52:24.865063       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 11:52:24.865086       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:52:24.938859       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:52:24.939791       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:52:24.939807       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:52:24.951143       1 config.go:200] "Starting service config controller"
	I0929 11:52:24.951508       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:52:24.951756       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:52:24.951874       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:52:24.952014       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:52:24.952140       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:52:24.955349       1 config.go:309] "Starting node config controller"
	I0929 11:52:24.955671       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:52:24.955939       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 11:52:25.051884       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 11:52:25.056127       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:52:25.056143       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [d17e345f5764] <==
	I0929 11:52:06.589363       1 server_linux.go:53] "Using iptables proxy"
	I0929 11:52:06.688192       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0929 11:52:06.689895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-345567&limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 11:52:08.135725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-345567&limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [74103287cd23] <==
	I0929 11:52:08.685017       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [bd144e0b1825] <==
	E0929 11:52:15.295690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.39.165:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 11:52:15.396950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.39.165:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 11:52:15.504843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.39.165:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 11:52:15.650564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.39.165:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 11:52:15.686797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.39.165:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 11:52:18.377371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.39.165:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 11:52:18.816578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.39.165:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 11:52:19.067097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.39.165:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 11:52:19.370328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.165:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 11:52:19.519200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.39.165:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 11:52:19.683466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.39.165:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 11:52:19.946451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.39.165:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 11:52:19.952321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.39.165:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 11:52:20.058766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.39.165:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 11:52:20.136061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.165:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 11:52:20.510461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.39.165:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 11:52:22.918741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 11:52:22.918756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 11:52:22.918912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 11:52:22.919195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 11:52:22.920003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 11:52:22.921510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 11:52:22.923086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 11:52:22.923604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I0929 11:52:30.132408       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 11:58:14 functional-345567 kubelet[11641]: E0929 11:58:14.707842   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bf7fbbe7-176a-4572-843e-5c7514e63c62"
	Sep 29 11:58:18 functional-345567 kubelet[11641]: E0929 11:58:18.710075   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jjzsx" podUID="5644d278-59ff-422e-baa9-b14a5238ef8f"
	Sep 29 11:58:21 functional-345567 kubelet[11641]: E0929 11:58:21.710847   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-drk25" podUID="ca84faec-7fe2-411c-964f-571eda825801"
	Sep 29 11:58:24 functional-345567 kubelet[11641]: E0929 11:58:24.711630   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-ltcz6" podUID="48ff1da7-2b29-4e1a-a131-015577223249"
	Sep 29 11:58:28 functional-345567 kubelet[11641]: E0929 11:58:28.708322   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bf7fbbe7-176a-4572-843e-5c7514e63c62"
	Sep 29 11:58:31 functional-345567 kubelet[11641]: E0929 11:58:31.845617   11641 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:58:31 functional-345567 kubelet[11641]: E0929 11:58:31.845668   11641 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:58:31 functional-345567 kubelet[11641]: E0929 11:58:31.845732   11641 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-jjzsx_kubernetes-dashboard(5644d278-59ff-422e-baa9-b14a5238ef8f): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 11:58:31 functional-345567 kubelet[11641]: E0929 11:58:31.845764   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jjzsx" podUID="5644d278-59ff-422e-baa9-b14a5238ef8f"
	Sep 29 11:58:35 functional-345567 kubelet[11641]: E0929 11:58:35.714984   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-drk25" podUID="ca84faec-7fe2-411c-964f-571eda825801"
	Sep 29 11:58:38 functional-345567 kubelet[11641]: E0929 11:58:38.781111   11641 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:58:38 functional-345567 kubelet[11641]: E0929 11:58:38.781181   11641 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:58:38 functional-345567 kubelet[11641]: E0929 11:58:38.781320   11641 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-ltcz6_kubernetes-dashboard(48ff1da7-2b29-4e1a-a131-015577223249): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 11:58:38 functional-345567 kubelet[11641]: E0929 11:58:38.781390   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-ltcz6" podUID="48ff1da7-2b29-4e1a-a131-015577223249"
	Sep 29 11:58:39 functional-345567 kubelet[11641]: E0929 11:58:39.708976   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bf7fbbe7-176a-4572-843e-5c7514e63c62"
	Sep 29 11:58:45 functional-345567 kubelet[11641]: E0929 11:58:45.711589   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jjzsx" podUID="5644d278-59ff-422e-baa9-b14a5238ef8f"
	Sep 29 11:58:48 functional-345567 kubelet[11641]: E0929 11:58:48.819796   11641 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 29 11:58:48 functional-345567 kubelet[11641]: E0929 11:58:48.819847   11641 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 29 11:58:48 functional-345567 kubelet[11641]: E0929 11:58:48.820169   11641 kuberuntime_manager.go:1449] "Unhandled Error" err="container mysql start failed in pod mysql-5bb876957f-drk25_default(ca84faec-7fe2-411c-964f-571eda825801): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 11:58:48 functional-345567 kubelet[11641]: E0929 11:58:48.820209   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-drk25" podUID="ca84faec-7fe2-411c-964f-571eda825801"
	Sep 29 11:58:50 functional-345567 kubelet[11641]: E0929 11:58:50.711475   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-ltcz6" podUID="48ff1da7-2b29-4e1a-a131-015577223249"
	Sep 29 11:58:52 functional-345567 kubelet[11641]: E0929 11:58:52.889842   11641 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 29 11:58:52 functional-345567 kubelet[11641]: E0929 11:58:52.889884   11641 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 29 11:58:52 functional-345567 kubelet[11641]: E0929 11:58:52.890095   11641 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(bf7fbbe7-176a-4572-843e-5c7514e63c62): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 11:58:52 functional-345567 kubelet[11641]: E0929 11:58:52.890123   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bf7fbbe7-176a-4572-843e-5c7514e63c62"
	
	
	==> storage-provisioner [411825215d27] <==
	W0929 11:58:30.145075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:58:32.149104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:58:32.155085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:58:34.163737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:58:34.169880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:58:36.174072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:58:36.179830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:58:38.183713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:58:38.188882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:58:40.192754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:58:40.198588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:58:42.205540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:58:42.215213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:58:44.218581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:58:44.223888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:58:46.228999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:58:46.237812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:58:48.242391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:58:48.248018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:58:50.251653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:58:50.257140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:58:52.260915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:58:52.266817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:58:54.273365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:58:54.282805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [4441c251624a] <==
	I0929 11:52:06.520774       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 11:52:06.525886       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-345567 -n functional-345567
helpers_test.go:269: (dbg) Run:  kubectl --context functional-345567 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-drk25 sp-pod dashboard-metrics-scraper-77bf4d6c4c-ltcz6 kubernetes-dashboard-855c9754f9-jjzsx
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-345567 describe pod busybox-mount mysql-5bb876957f-drk25 sp-pod dashboard-metrics-scraper-77bf4d6c4c-ltcz6 kubernetes-dashboard-855c9754f9-jjzsx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-345567 describe pod busybox-mount mysql-5bb876957f-drk25 sp-pod dashboard-metrics-scraper-77bf4d6c4c-ltcz6 kubernetes-dashboard-855c9754f9-jjzsx: exit status 1 (94.139161ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-345567/192.168.39.165
	Start Time:       Mon, 29 Sep 2025 11:52:49 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.15
	IPs:
	  IP:  10.244.0.15
	Containers:
	  mount-munger:
	    Container ID:  docker://ce94dd62b125d678401e70ac6f390e8514578a04c5df6bb8e5cd2d1e8ec1c46f
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 11:52:51 +0000
	      Finished:     Mon, 29 Sep 2025 11:52:51 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jclnd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-jclnd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  6m6s  default-scheduler  Successfully assigned default/busybox-mount to functional-345567
	  Normal  Pulling    6m5s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6m4s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.578s (1.578s including waiting). Image size: 4403845 bytes.
	  Normal  Created    6m4s  kubelet            Created container: mount-munger
	  Normal  Started    6m4s  kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-drk25
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-345567/192.168.39.165
	Start Time:       Mon, 29 Sep 2025 11:52:56 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.17
	IPs:
	  IP:           10.244.0.17
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qv62j (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qv62j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m59s                  default-scheduler  Successfully assigned default/mysql-5bb876957f-drk25 to functional-345567
	  Normal   Pulling    2m51s (x5 over 5m58s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m51s (x5 over 5m58s)  kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m51s (x5 over 5m58s)  kubelet            Error: ErrImagePull
	  Warning  Failed     49s (x20 over 5m57s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    34s (x21 over 5m57s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-345567/192.168.39.165
	Start Time:       Mon, 29 Sep 2025 11:52:53 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.16
	IPs:
	  IP:  10.244.0.16
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t797q (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-t797q:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m2s                   default-scheduler  Successfully assigned default/sp-pod to functional-345567
	  Warning  Failed     6m1s                   kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m54s (x5 over 6m1s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m54s (x5 over 6m1s)   kubelet            Error: ErrImagePull
	  Warning  Failed     2m54s (x4 over 5m46s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    54s (x21 over 6m1s)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     54s (x21 over 6m1s)    kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-ltcz6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-jjzsx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-345567 describe pod busybox-mount mysql-5bb876957f-drk25 sp-pod dashboard-metrics-scraper-77bf4d6c4c-ltcz6 kubernetes-dashboard-855c9754f9-jjzsx: exit status 1
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (370.00s)
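The failure above is an image-pull problem rather than a storage problem: the PVC bound and sp-pod was scheduled, but docker.io/nginx never got past Docker Hub's unauthenticated pull limit (toomanyrequests). A minimal remediation sketch, assuming Docker Hub credentials are available to the CI host (the secret name dockerhub-cred and the DOCKERHUB_USER/DOCKERHUB_TOKEN variables are placeholders, not part of this run), is to attach a pull secret to the default service account so retried pulls are authenticated:

	kubectl --context functional-345567 create secret docker-registry dockerhub-cred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username="$DOCKERHUB_USER" --docker-password="$DOCKERHUB_TOKEN"
	kubectl --context functional-345567 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"dockerhub-cred"}]}'

Pods created before the patch keep their already-admitted spec, so sp-pod would need to be deleted and recreated (or the test rerun) for the authenticated pull to take effect.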

                                                
                                    
x
+
TestFunctional/parallel/MySQL (602.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-345567 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-drk25" [ca84faec-7fe2-411c-964f-571eda825801] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-345567 -n functional-345567
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-09-29 12:02:56.506445257 +0000 UTC m=+1962.661178931
functional_test.go:1804: (dbg) Run:  kubectl --context functional-345567 describe po mysql-5bb876957f-drk25 -n default
functional_test.go:1804: (dbg) kubectl --context functional-345567 describe po mysql-5bb876957f-drk25 -n default:
Name:             mysql-5bb876957f-drk25
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-345567/192.168.39.165
Start Time:       Mon, 29 Sep 2025 11:52:56 +0000
Labels:           app=mysql
pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.17
IPs:
IP:           10.244.0.17
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP (mysql)
Host Port:      0/TCP (mysql)
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qv62j (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-qv62j:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/mysql-5bb876957f-drk25 to functional-345567
Normal   Pulling    6m52s (x5 over 9m59s)   kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     6m52s (x5 over 9m59s)   kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     6m52s (x5 over 9m59s)   kubelet            Error: ErrImagePull
Warning  Failed     4m50s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m35s (x21 over 9m58s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-345567 logs mysql-5bb876957f-drk25 -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-345567 logs mysql-5bb876957f-drk25 -n default: exit status 1 (72.983195ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-drk25" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-345567 logs mysql-5bb876957f-drk25 -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
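As with the PersistentVolumeClaim failure, mysql-5bb876957f-drk25 never starts because docker.io/mysql:5.7 cannot be pulled past Docker Hub's anonymous rate limit. A quick diagnostic sketch, assuming curl and jq are installed on the test host and using Docker Hub's documented ratelimitpreview/test endpoint, shows how much anonymous quota the host has left:

	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit

A ratelimit-remaining header of 0 would confirm that the toomanyrequests errors above are quota exhaustion from this host's IP rather than a registry outage.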
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-345567 -n functional-345567
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-345567 logs -n 25: (1.140257684s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-345567 ssh findmnt -T /mount2                                                                                                                │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:52 UTC │ 29 Sep 25 11:52 UTC │
	│ ssh            │ functional-345567 ssh findmnt -T /mount3                                                                                                                │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:52 UTC │ 29 Sep 25 11:52 UTC │
	│ mount          │ -p functional-345567 --kill=true                                                                                                                        │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:52 UTC │                     │
	│ image          │ functional-345567 image load --daemon kicbase/echo-server:functional-345567 --alsologtostderr                                                           │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:52 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls                                                                                                                              │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image load --daemon kicbase/echo-server:functional-345567 --alsologtostderr                                                           │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls                                                                                                                              │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image load --daemon kicbase/echo-server:functional-345567 --alsologtostderr                                                           │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls                                                                                                                              │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image save kicbase/echo-server:functional-345567 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image rm kicbase/echo-server:functional-345567 --alsologtostderr                                                                      │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls                                                                                                                              │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr                                       │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls                                                                                                                              │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image save --daemon kicbase/echo-server:functional-345567 --alsologtostderr                                                           │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ update-context │ functional-345567 update-context --alsologtostderr -v=2                                                                                                 │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ update-context │ functional-345567 update-context --alsologtostderr -v=2                                                                                                 │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ update-context │ functional-345567 update-context --alsologtostderr -v=2                                                                                                 │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls --format short --alsologtostderr                                                                                             │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls --format yaml --alsologtostderr                                                                                              │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ ssh            │ functional-345567 ssh pgrep buildkitd                                                                                                                   │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │                     │
	│ image          │ functional-345567 image build -t localhost/my-image:functional-345567 testdata/build --alsologtostderr                                                  │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls                                                                                                                              │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls --format json --alsologtostderr                                                                                              │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	│ image          │ functional-345567 image ls --format table --alsologtostderr                                                                                             │ functional-345567 │ jenkins │ v1.37.0 │ 29 Sep 25 11:53 UTC │ 29 Sep 25 11:53 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:52:54
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:52:54.886007  608412 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:52:54.886275  608412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:52:54.886285  608412 out.go:374] Setting ErrFile to fd 2...
	I0929 11:52:54.886290  608412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:52:54.886575  608412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21654-591397/.minikube/bin
	I0929 11:52:54.887080  608412 out.go:368] Setting JSON to false
	I0929 11:52:54.888152  608412 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5723,"bootTime":1759141052,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:52:54.888257  608412 start.go:140] virtualization: kvm guest
	I0929 11:52:54.890356  608412 out.go:179] * [functional-345567] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:52:54.891776  608412 notify.go:220] Checking for updates...
	I0929 11:52:54.891846  608412 out.go:179]   - MINIKUBE_LOCATION=21654
	I0929 11:52:54.893445  608412 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:52:54.894736  608412 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21654-591397/kubeconfig
	I0929 11:52:54.896027  608412 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21654-591397/.minikube
	I0929 11:52:54.897194  608412 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:52:54.898527  608412 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:52:54.901462  608412 config.go:182] Loaded profile config "functional-345567": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:52:54.902092  608412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:52:54.902190  608412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:52:54.918838  608412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34729
	I0929 11:52:54.919337  608412 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:52:54.919911  608412 main.go:141] libmachine: Using API Version  1
	I0929 11:52:54.919942  608412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:52:54.920387  608412 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:52:54.920611  608412 main.go:141] libmachine: (functional-345567) Calling .DriverName
	I0929 11:52:54.920900  608412 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:52:54.921299  608412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:52:54.921348  608412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:52:54.936850  608412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0929 11:52:54.937516  608412 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:52:54.938257  608412 main.go:141] libmachine: Using API Version  1
	I0929 11:52:54.938293  608412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:52:54.938784  608412 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:52:54.939026  608412 main.go:141] libmachine: (functional-345567) Calling .DriverName
	I0929 11:52:54.980510  608412 out.go:179] * Using the kvm2 driver based on the existing profile
	I0929 11:52:54.981656  608412 start.go:304] selected driver: kvm2
	I0929 11:52:54.981676  608412 start.go:924] validating driver "kvm2" against &{Name:functional-345567 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-345567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:52:54.981806  608412 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:52:54.984075  608412 out.go:203] 
	W0929 11:52:54.986131  608412 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB
	I0929 11:52:54.987384  608412 out.go:203] 
	
	
	==> Docker <==
	Sep 29 11:53:37 functional-345567 dockerd[8452]: time="2025-09-29T11:53:37.828468447Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:53:37 functional-345567 dockerd[8452]: time="2025-09-29T11:53:37.866527290Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:53:38 functional-345567 dockerd[8452]: time="2025-09-29T11:53:38.831836323Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:53:38 functional-345567 dockerd[8452]: time="2025-09-29T11:53:38.940708907Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:54:18 functional-345567 dockerd[8452]: time="2025-09-29T11:54:18.737064767Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:54:18 functional-345567 dockerd[8452]: time="2025-09-29T11:54:18.847775644Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:54:18 functional-345567 cri-dockerd[9434]: time="2025-09-29T11:54:18Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
	Sep 29 11:54:29 functional-345567 dockerd[8452]: time="2025-09-29T11:54:29.742977658Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:54:29 functional-345567 dockerd[8452]: time="2025-09-29T11:54:29.789718835Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:54:30 functional-345567 dockerd[8452]: time="2025-09-29T11:54:30.812677558Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:54:31 functional-345567 dockerd[8452]: time="2025-09-29T11:54:31.810597682Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:55:50 functional-345567 dockerd[8452]: time="2025-09-29T11:55:50.739945495Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:55:50 functional-345567 dockerd[8452]: time="2025-09-29T11:55:50.784699281Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:55:55 functional-345567 dockerd[8452]: time="2025-09-29T11:55:55.734875184Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:55:55 functional-345567 dockerd[8452]: time="2025-09-29T11:55:55.777484806Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:56:01 functional-345567 dockerd[8452]: time="2025-09-29T11:56:01.838836743Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:56:04 functional-345567 dockerd[8452]: time="2025-09-29T11:56:04.829637369Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:58:31 functional-345567 dockerd[8452]: time="2025-09-29T11:58:31.739347822Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:58:31 functional-345567 dockerd[8452]: time="2025-09-29T11:58:31.841406041Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:58:31 functional-345567 cri-dockerd[9434]: time="2025-09-29T11:58:31Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
	Sep 29 11:58:38 functional-345567 dockerd[8452]: time="2025-09-29T11:58:38.732849896Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:58:38 functional-345567 dockerd[8452]: time="2025-09-29T11:58:38.777079011Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:58:48 functional-345567 dockerd[8452]: time="2025-09-29T11:58:48.816028647Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:58:52 functional-345567 dockerd[8452]: time="2025-09-29T11:58:52.884013029Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:58:52 functional-345567 cri-dockerd[9434]: time="2025-09-29T11:58:52Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ce94dd62b125d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 minutes ago      Exited              mount-munger              0                   8d7f0bfdf9cfb       busybox-mount
	4be86a79d09d0       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           10 minutes ago      Running             echo-server               0                   947dcb252dd05       hello-node-connect-7d85dfc575-lrm8c
	71dea8f862b81       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           10 minutes ago      Running             echo-server               0                   72daa0283c9ef       hello-node-75c85bcc94-xr87t
	50a0da838737e       52546a367cc9e                                                                                         10 minutes ago      Running             coredns                   5                   4049429ce4236       coredns-66bc5c9577-xk7nm
	411825215d27e       6e38f40d628db                                                                                         10 minutes ago      Running             storage-provisioner       4                   563027a986f3d       storage-provisioner
	b32b19fdb12c8       df0860106674d                                                                                         10 minutes ago      Running             kube-proxy                4                   22518e2355969       kube-proxy-2fqpd
	90204288ee92a       90550c43ad2bc                                                                                         10 minutes ago      Running             kube-apiserver            0                   23505abfd486b       kube-apiserver-functional-345567
	ea4a881719e9b       a0af72f2ec6d6                                                                                         10 minutes ago      Running             kube-controller-manager   4                   6f92ee8d98831       kube-controller-manager-functional-345567
	2722204aec368       52546a367cc9e                                                                                         10 minutes ago      Running             coredns                   3                   c741d758f049d       coredns-66bc5c9577-mjdq6
	3e5e6adba4ebb       5f1f5298c888d                                                                                         10 minutes ago      Running             etcd                      3                   afdcfcc8dc192       etcd-functional-345567
	bd144e0b1825e       46169d968e920                                                                                         10 minutes ago      Running             kube-scheduler            4                   e9341f8a976df       kube-scheduler-functional-345567
	d71ede638e4d2       52546a367cc9e                                                                                         10 minutes ago      Exited              coredns                   4                   30b0133e782e5       coredns-66bc5c9577-xk7nm
	74103287cd23b       46169d968e920                                                                                         10 minutes ago      Exited              kube-scheduler            3                   4ec944c1a51c9       kube-scheduler-functional-345567
	7d3198e132f24       a0af72f2ec6d6                                                                                         10 minutes ago      Exited              kube-controller-manager   3                   86172d5e4ed87       kube-controller-manager-functional-345567
	4441c251624ac       6e38f40d628db                                                                                         10 minutes ago      Exited              storage-provisioner       3                   8deae7d11ce0d       storage-provisioner
	d17e345f5764f       df0860106674d                                                                                         10 minutes ago      Exited              kube-proxy                3                   9c5baa8d8ef07       kube-proxy-2fqpd
	ed7eee2023740       52546a367cc9e                                                                                         11 minutes ago      Exited              coredns                   2                   a5aae7bb8491c       coredns-66bc5c9577-mjdq6
	976b2c11ea333       5f1f5298c888d                                                                                         11 minutes ago      Exited              etcd                      2                   ceb47995d56fe       etcd-functional-345567
	
	
	==> coredns [2722204aec36] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58746 - 55120 "HINFO IN 7801123286633978322.7662679228127234237. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036452724s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [50a0da838737] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38522 - 50589 "HINFO IN 7299539350853405645.7723234432700575792. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020792926s
	
	
	==> coredns [d71ede638e4d] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:53869 - 12162 "HINFO IN 3342720793649580752.1000981730392323068. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.417393383s
	
	
	==> coredns [ed7eee202374] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36287 - 17292 "HINFO IN 3402359101510948574.979122807022581316. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.023865412s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-345567
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-345567
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8c533eb3dce8338d3d4e7231ea97d1c44ed6f81
	                    minikube.k8s.io/name=functional-345567
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T11_50_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 11:50:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-345567
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 12:02:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:58:40 +0000   Mon, 29 Sep 2025 11:49:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:58:40 +0000   Mon, 29 Sep 2025 11:49:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:58:40 +0000   Mon, 29 Sep 2025 11:49:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:58:40 +0000   Mon, 29 Sep 2025 11:50:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.165
	  Hostname:    functional-345567
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 8559d0fdeb664bce82856171ffe07f7f
	  System UUID:                8559d0fd-eb66-4bce-8285-6171ffe07f7f
	  Boot ID:                    fc84bad7-00d6-47c0-8939-3febc52a0433
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-xr87t                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-lrm8c           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-drk25                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-mjdq6                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 coredns-66bc5c9577-xk7nm                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-345567                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-345567              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-345567     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-2fqpd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-345567              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-ltcz6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-jjzsx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (72%)  700m (35%)
	  memory             752Mi (19%)  1040Mi (26%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-345567 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node functional-345567 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-345567 status is now: NodeHasSufficientMemory
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-345567 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-345567 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-345567 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                node-controller  Node functional-345567 event: Registered Node functional-345567 in Controller
	  Normal  NodeReady                12m                kubelet          Node functional-345567 status is now: NodeReady
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-345567 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-345567 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-345567 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-345567 event: Registered Node functional-345567 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-345567 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-345567 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-345567 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-345567 event: Registered Node functional-345567 in Controller
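
	Note: the "Allocated resources" totals above are simply the column sums of the Non-terminated Pods table; as a quick sanity check of the reported 1450m (72%) CPU requests against the node's 2-CPU (2000m) capacity, assuming nothing outside that table requests CPU:

	\[
	600 + 2\cdot 100 + 100 + 250 + 200 + 100 = 1450\ \text{m of CPU requests},
	\qquad \frac{1450\ \text{m}}{2000\ \text{m}} \approx 72\%.
	\]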
	
	
	==> dmesg <==
	[  +0.108237] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.116386] kauditd_printk_skb: 373 callbacks suppressed
	[  +0.102475] kauditd_printk_skb: 205 callbacks suppressed
	[Sep29 11:50] kauditd_printk_skb: 165 callbacks suppressed
	[  +1.045902] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.979507] kauditd_printk_skb: 270 callbacks suppressed
	[  +0.189723] kauditd_printk_skb: 2 callbacks suppressed
	[ +20.191182] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.446606] kauditd_printk_skb: 22 callbacks suppressed
	[Sep29 11:51] kauditd_printk_skb: 515 callbacks suppressed
	[  +0.000045] kauditd_printk_skb: 106 callbacks suppressed
	[  +4.857046] kauditd_printk_skb: 111 callbacks suppressed
	[  +7.559965] kauditd_printk_skb: 98 callbacks suppressed
	[ +15.196663] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.479035] kauditd_printk_skb: 22 callbacks suppressed
	[Sep29 11:52] kauditd_printk_skb: 470 callbacks suppressed
	[  +0.000025] kauditd_printk_skb: 178 callbacks suppressed
	[  +4.255392] kauditd_printk_skb: 66 callbacks suppressed
	[  +6.794943] kauditd_printk_skb: 84 callbacks suppressed
	[  +4.437328] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.623177] kauditd_printk_skb: 91 callbacks suppressed
	[  +1.873208] kauditd_printk_skb: 146 callbacks suppressed
	[  +2.698217] kauditd_printk_skb: 79 callbacks suppressed
	[Sep29 11:53] crun[14706]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.000093] kauditd_printk_skb: 104 callbacks suppressed
	
	
	==> etcd [3e5e6adba4eb] <==
	{"level":"warn","ts":"2025-09-29T11:52:21.847879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55210","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:55210: read: connection reset by peer"}
	{"level":"warn","ts":"2025-09-29T11:52:21.870351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.881434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.896048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.918341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.930053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.941528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.952839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.972455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:21.985991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.009906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.016610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.027497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.039261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.062312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.072788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.100425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.135296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.147547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.169177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.180777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:52:22.247522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55566","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T12:02:21.210381Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1325}
	{"level":"info","ts":"2025-09-29T12:02:21.235979Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1325,"took":"24.660901ms","hash":1907160394,"current-db-size-bytes":3792896,"current-db-size":"3.8 MB","current-db-size-in-use-bytes":1941504,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-09-29T12:02:21.236053Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1907160394,"revision":1325,"compact-revision":-1}
	
	
	==> etcd [976b2c11ea33] <==
	{"level":"warn","ts":"2025-09-29T11:51:16.188492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:51:16.199134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:51:16.226491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:51:16.235921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:51:16.247338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:51:16.254037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:51:16.305662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43568","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T11:51:53.117016Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T11:51:53.117110Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-345567","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.165:2380"],"advertise-client-urls":["https://192.168.39.165:2379"]}
	{"level":"error","ts":"2025-09-29T11:51:53.117211Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:51:53.119444Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:52:00.125423Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:52:00.125528Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ffc3b7517aaad9f6","current-leader-member-id":"ffc3b7517aaad9f6"}
	{"level":"info","ts":"2025-09-29T11:52:00.127672Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T11:52:00.127714Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T11:52:00.129571Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:52:00.129658Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:52:00.129668Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T11:52:00.129704Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.165:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:52:00.130178Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.165:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:52:00.130318Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.165:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:52:00.133561Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"error","ts":"2025-09-29T11:52:00.133635Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.165:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:52:00.133811Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2025-09-29T11:52:00.133909Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-345567","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.165:2380"],"advertise-client-urls":["https://192.168.39.165:2379"]}
	
	
	==> kernel <==
	 12:02:57 up 13 min,  0 users,  load average: 0.27, 0.34, 0.34
	Linux functional-345567 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [90204288ee92] <==
	I0929 11:52:26.648056       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 11:52:40.519895       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.248.96"}
	I0929 11:52:45.183111       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.97.77"}
	I0929 11:52:46.101049       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.121.36"}
	I0929 11:52:56.142766       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.99.90.196"}
	I0929 11:52:56.718813       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 11:52:57.321973       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.24.8"}
	I0929 11:52:57.354836       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.5.50"}
	I0929 11:53:29.416192       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:53:51.223561       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:54:45.491383       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:55:09.464467       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:55:54.861137       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:56:17.689709       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:57:17.511932       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:57:27.615807       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:58:23.497988       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:58:49.198750       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:59:32.631331       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:00:01.616641       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:00:40.745618       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:01:09.802380       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:02:09.150469       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:02:22.950175       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 12:02:36.777988       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [7d3198e132f2] <==
	I0929 11:52:08.174197       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-controller-manager [ea4a881719e9] <==
	I0929 11:52:26.249165       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-345567"
	I0929 11:52:26.249423       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 11:52:26.245940       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 11:52:26.250150       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 11:52:26.250635       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 11:52:26.252617       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 11:52:26.258588       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 11:52:26.260291       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 11:52:26.267491       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 11:52:26.271425       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 11:52:26.271617       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 11:52:26.323832       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:52:26.345201       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:52:26.345283       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 11:52:26.345291       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0929 11:52:56.979172       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:56.991484       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:57.014127       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:57.015058       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:57.035545       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:57.047483       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:57.072110       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:57.073964       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:57.094822       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:52:57.095756       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [b32b19fdb12c] <==
	I0929 11:52:24.634935       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:52:24.735066       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:52:24.735134       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.165"]
	E0929 11:52:24.735202       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:52:24.865013       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 11:52:24.865063       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 11:52:24.865086       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:52:24.938859       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:52:24.939791       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:52:24.939807       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:52:24.951143       1 config.go:200] "Starting service config controller"
	I0929 11:52:24.951508       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:52:24.951756       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:52:24.951874       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:52:24.952014       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:52:24.952140       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:52:24.955349       1 config.go:309] "Starting node config controller"
	I0929 11:52:24.955671       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:52:24.955939       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 11:52:25.051884       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 11:52:25.056127       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:52:25.056143       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [d17e345f5764] <==
	I0929 11:52:06.589363       1 server_linux.go:53] "Using iptables proxy"
	I0929 11:52:06.688192       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0929 11:52:06.689895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-345567&limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 11:52:08.135725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-345567&limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [74103287cd23] <==
	I0929 11:52:08.685017       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [bd144e0b1825] <==
	E0929 11:52:15.295690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.39.165:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 11:52:15.396950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.39.165:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 11:52:15.504843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.39.165:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 11:52:15.650564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.39.165:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 11:52:15.686797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.39.165:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 11:52:18.377371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.39.165:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 11:52:18.816578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.39.165:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 11:52:19.067097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.39.165:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 11:52:19.370328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.165:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 11:52:19.519200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.39.165:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 11:52:19.683466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.39.165:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 11:52:19.946451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.39.165:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 11:52:19.952321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.39.165:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 11:52:20.058766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.39.165:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 11:52:20.136061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.165:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 11:52:20.510461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.39.165:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 11:52:22.918741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 11:52:22.918756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 11:52:22.918912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 11:52:22.919195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 11:52:22.920003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 11:52:22.921510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 11:52:22.923086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 11:52:22.923604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I0929 11:52:30.132408       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 12:01:38 functional-345567 kubelet[11641]: E0929 12:01:38.708481   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bf7fbbe7-176a-4572-843e-5c7514e63c62"
	Sep 29 12:01:39 functional-345567 kubelet[11641]: E0929 12:01:39.711304   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-drk25" podUID="ca84faec-7fe2-411c-964f-571eda825801"
	Sep 29 12:01:43 functional-345567 kubelet[11641]: E0929 12:01:43.712048   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-ltcz6" podUID="48ff1da7-2b29-4e1a-a131-015577223249"
	Sep 29 12:01:45 functional-345567 kubelet[11641]: E0929 12:01:45.710694   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jjzsx" podUID="5644d278-59ff-422e-baa9-b14a5238ef8f"
	Sep 29 12:01:52 functional-345567 kubelet[11641]: E0929 12:01:52.713091   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-drk25" podUID="ca84faec-7fe2-411c-964f-571eda825801"
	Sep 29 12:01:53 functional-345567 kubelet[11641]: E0929 12:01:53.708693   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bf7fbbe7-176a-4572-843e-5c7514e63c62"
	Sep 29 12:01:58 functional-345567 kubelet[11641]: E0929 12:01:58.709908   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-ltcz6" podUID="48ff1da7-2b29-4e1a-a131-015577223249"
	Sep 29 12:02:00 functional-345567 kubelet[11641]: E0929 12:02:00.710142   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jjzsx" podUID="5644d278-59ff-422e-baa9-b14a5238ef8f"
	Sep 29 12:02:05 functional-345567 kubelet[11641]: E0929 12:02:05.711299   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-drk25" podUID="ca84faec-7fe2-411c-964f-571eda825801"
	Sep 29 12:02:07 functional-345567 kubelet[11641]: E0929 12:02:07.708132   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bf7fbbe7-176a-4572-843e-5c7514e63c62"
	Sep 29 12:02:09 functional-345567 kubelet[11641]: E0929 12:02:09.712266   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-ltcz6" podUID="48ff1da7-2b29-4e1a-a131-015577223249"
	Sep 29 12:02:15 functional-345567 kubelet[11641]: E0929 12:02:15.713563   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jjzsx" podUID="5644d278-59ff-422e-baa9-b14a5238ef8f"
	Sep 29 12:02:19 functional-345567 kubelet[11641]: E0929 12:02:19.727077   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bf7fbbe7-176a-4572-843e-5c7514e63c62"
	Sep 29 12:02:20 functional-345567 kubelet[11641]: E0929 12:02:20.710208   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-drk25" podUID="ca84faec-7fe2-411c-964f-571eda825801"
	Sep 29 12:02:23 functional-345567 kubelet[11641]: E0929 12:02:23.711068   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-ltcz6" podUID="48ff1da7-2b29-4e1a-a131-015577223249"
	Sep 29 12:02:27 functional-345567 kubelet[11641]: E0929 12:02:27.710505   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jjzsx" podUID="5644d278-59ff-422e-baa9-b14a5238ef8f"
	Sep 29 12:02:30 functional-345567 kubelet[11641]: E0929 12:02:30.707779   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bf7fbbe7-176a-4572-843e-5c7514e63c62"
	Sep 29 12:02:31 functional-345567 kubelet[11641]: E0929 12:02:31.710065   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-drk25" podUID="ca84faec-7fe2-411c-964f-571eda825801"
	Sep 29 12:02:38 functional-345567 kubelet[11641]: E0929 12:02:38.710986   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jjzsx" podUID="5644d278-59ff-422e-baa9-b14a5238ef8f"
	Sep 29 12:02:38 functional-345567 kubelet[11641]: E0929 12:02:38.711627   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-ltcz6" podUID="48ff1da7-2b29-4e1a-a131-015577223249"
	Sep 29 12:02:43 functional-345567 kubelet[11641]: E0929 12:02:43.707905   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bf7fbbe7-176a-4572-843e-5c7514e63c62"
	Sep 29 12:02:46 functional-345567 kubelet[11641]: E0929 12:02:46.710415   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-drk25" podUID="ca84faec-7fe2-411c-964f-571eda825801"
	Sep 29 12:02:50 functional-345567 kubelet[11641]: E0929 12:02:50.710429   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jjzsx" podUID="5644d278-59ff-422e-baa9-b14a5238ef8f"
	Sep 29 12:02:51 functional-345567 kubelet[11641]: E0929 12:02:51.710375   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-ltcz6" podUID="48ff1da7-2b29-4e1a-a131-015577223249"
	Sep 29 12:02:56 functional-345567 kubelet[11641]: E0929 12:02:56.708454   11641 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bf7fbbe7-176a-4572-843e-5c7514e63c62"
	
	
	==> storage-provisioner [411825215d27] <==
	W0929 12:02:33.568190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:02:35.572207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:02:35.582470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:02:37.586617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:02:37.593613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:02:39.599449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:02:39.608806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:02:41.612475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:02:41.618660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:02:43.622940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:02:43.628471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:02:45.632975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:02:45.641622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:02:47.646107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:02:47.654157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:02:49.658819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:02:49.664964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:02:51.668649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:02:51.674937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:02:53.678604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:02:53.687300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:02:55.691321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:02:55.696928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:02:57.701737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:02:57.716689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [4441c251624a] <==
	I0929 11:52:06.520774       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 11:52:06.525886       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-345567 -n functional-345567
helpers_test.go:269: (dbg) Run:  kubectl --context functional-345567 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-drk25 sp-pod dashboard-metrics-scraper-77bf4d6c4c-ltcz6 kubernetes-dashboard-855c9754f9-jjzsx
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-345567 describe pod busybox-mount mysql-5bb876957f-drk25 sp-pod dashboard-metrics-scraper-77bf4d6c4c-ltcz6 kubernetes-dashboard-855c9754f9-jjzsx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-345567 describe pod busybox-mount mysql-5bb876957f-drk25 sp-pod dashboard-metrics-scraper-77bf4d6c4c-ltcz6 kubernetes-dashboard-855c9754f9-jjzsx: exit status 1 (90.286547ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-345567/192.168.39.165
	Start Time:       Mon, 29 Sep 2025 11:52:49 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.15
	IPs:
	  IP:  10.244.0.15
	Containers:
	  mount-munger:
	    Container ID:  docker://ce94dd62b125d678401e70ac6f390e8514578a04c5df6bb8e5cd2d1e8ec1c46f
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 11:52:51 +0000
	      Finished:     Mon, 29 Sep 2025 11:52:51 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jclnd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-jclnd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-345567
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.578s (1.578s including waiting). Image size: 4403845 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-drk25
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-345567/192.168.39.165
	Start Time:       Mon, 29 Sep 2025 11:52:56 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.17
	IPs:
	  IP:           10.244.0.17
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qv62j (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qv62j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-5bb876957f-drk25 to functional-345567
	  Normal   Pulling    6m54s (x5 over 10m)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     6m54s (x5 over 10m)   kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m54s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m52s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m37s (x21 over 10m)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-345567/192.168.39.165
	Start Time:       Mon, 29 Sep 2025 11:52:53 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.16
	IPs:
	  IP:  10.244.0.16
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t797q (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-t797q:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/sp-pod to functional-345567
	  Warning  Failed     10m                    kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    6m57s (x5 over 10m)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     6m57s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     6m57s (x4 over 9m49s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2s (x43 over 10m)      kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2s (x43 over 10m)      kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-ltcz6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-jjzsx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-345567 describe pod busybox-mount mysql-5bb876957f-drk25 sp-pod dashboard-metrics-scraper-77bf4d6c4c-ltcz6 kubernetes-dashboard-855c9754f9-jjzsx: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.49s)
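Note: the functional-test failures captured above all trace to the same kubelet events: unauthenticated pulls from docker.io were rejected with "toomanyrequests" (Docker Hub's pull rate limit), so sp-pod, mysql-5bb876957f-drk25 and the dashboard pods never left ImagePullBackOff. A minimal mitigation sketch, assuming a Docker Hub account is available; the secret name dockerhub-creds and the credential placeholders are illustrative and not part of this run:

# Hypothetical sketch (not part of this test run): authenticate image pulls so
# docker.io stops returning "toomanyrequests".
# 1) Create a docker-registry secret with your own Docker Hub credentials.
kubectl --context functional-345567 create secret docker-registry dockerhub-creds \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<your-dockerhub-user> \
  --docker-password=<your-dockerhub-token>
# 2) Attach it to the default service account so the test pods pick it up
#    without editing each manifest.
kubectl --context functional-345567 patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "dockerhub-creds"}]}'

Pre-loading the images into the node (for example with "minikube image load") or pointing the cluster at a registry mirror would avoid the rate limit as well.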

                                                
                                    

Test pass (303/345)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.92
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.0/json-events 3.9
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.07
18 TestDownloadOnly/v1.34.0/DeleteAll 0.15
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.66
22 TestOffline 61.87
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 172.47
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 9.64
35 TestAddons/parallel/Registry 14.84
36 TestAddons/parallel/RegistryCreds 0.78
38 TestAddons/parallel/InspektorGadget 6.23
39 TestAddons/parallel/MetricsServer 7.52
42 TestAddons/parallel/Headlamp 18.79
43 TestAddons/parallel/CloudSpanner 5.56
45 TestAddons/parallel/NvidiaDevicePlugin 6.42
48 TestAddons/StoppedEnableDisable 13.52
49 TestCertOptions 67.01
50 TestCertExpiration 334.53
51 TestDockerFlags 82.92
52 TestForceSystemdFlag 65.24
53 TestForceSystemdEnv 54.11
55 TestKVMDriverInstallOrUpdate 0.62
59 TestErrorSpam/setup 43.92
60 TestErrorSpam/start 0.37
61 TestErrorSpam/status 0.82
62 TestErrorSpam/pause 1.46
63 TestErrorSpam/unpause 1.68
64 TestErrorSpam/stop 15.2
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 61.31
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 66.39
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.12
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.35
76 TestFunctional/serial/CacheCmd/cache/add_local 0.77
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.23
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 63.65
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.13
87 TestFunctional/serial/LogsFileCmd 1.19
88 TestFunctional/serial/InvalidService 4.04
90 TestFunctional/parallel/ConfigCmd 0.36
92 TestFunctional/parallel/DryRun 0.34
93 TestFunctional/parallel/InternationalLanguage 0.16
94 TestFunctional/parallel/StatusCmd 1
98 TestFunctional/parallel/ServiceCmdConnect 8.6
99 TestFunctional/parallel/AddonsCmd 0.14
102 TestFunctional/parallel/SSHCmd 0.44
103 TestFunctional/parallel/CpCmd 1.54
105 TestFunctional/parallel/FileSync 0.25
106 TestFunctional/parallel/CertSync 1.52
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.22
114 TestFunctional/parallel/License 0.25
115 TestFunctional/parallel/DockerEnv/bash 0.97
116 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
117 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
118 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
119 TestFunctional/parallel/ServiceCmd/DeployApp 9.25
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
130 TestFunctional/parallel/ProfileCmd/profile_list 0.37
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
132 TestFunctional/parallel/MountCmd/any-port 7.39
133 TestFunctional/parallel/ServiceCmd/List 0.42
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.31
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
136 TestFunctional/parallel/ServiceCmd/Format 0.35
137 TestFunctional/parallel/Version/short 0.05
138 TestFunctional/parallel/Version/components 0.5
139 TestFunctional/parallel/MountCmd/specific-port 2.02
140 TestFunctional/parallel/ServiceCmd/URL 0.35
141 TestFunctional/parallel/MountCmd/VerifyCleanup 1.53
142 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
143 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
144 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
145 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
146 TestFunctional/parallel/ImageCommands/ImageBuild 2.92
147 TestFunctional/parallel/ImageCommands/Setup 0.39
148 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.01
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.74
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 0.88
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.39
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.58
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
159 TestGvisorAddon 141.53
162 TestMultiControlPlane/serial/StartCluster 229.02
163 TestMultiControlPlane/serial/DeployApp 5.98
164 TestMultiControlPlane/serial/PingHostFromPods 1.41
165 TestMultiControlPlane/serial/AddWorkerNode 51.56
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.96
168 TestMultiControlPlane/serial/CopyFile 13.99
169 TestMultiControlPlane/serial/StopSecondaryNode 15.54
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
171 TestMultiControlPlane/serial/RestartSecondaryNode 25.15
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.04
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 158.08
174 TestMultiControlPlane/serial/DeleteSecondaryNode 7.89
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
176 TestMultiControlPlane/serial/StopCluster 41
177 TestMultiControlPlane/serial/RestartCluster 134.04
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
179 TestMultiControlPlane/serial/AddSecondaryNode 88.6
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.94
183 TestImageBuild/serial/Setup 47.18
184 TestImageBuild/serial/NormalBuild 1.6
185 TestImageBuild/serial/BuildWithBuildArg 1.1
186 TestImageBuild/serial/BuildWithDockerIgnore 0.89
187 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.16
191 TestJSONOutput/start/Command 65.95
192 TestJSONOutput/start/Audit 0
194 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/pause/Command 0.69
198 TestJSONOutput/pause/Audit 0
200 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/unpause/Command 0.63
204 TestJSONOutput/unpause/Audit 0
206 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
209 TestJSONOutput/stop/Command 14.21
210 TestJSONOutput/stop/Audit 0
212 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
213 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
214 TestErrorJSONOutput 0.22
219 TestMainNoArgs 0.05
220 TestMinikubeProfile 94.38
223 TestMountStart/serial/StartWithMountFirst 24.53
224 TestMountStart/serial/VerifyMountFirst 0.39
225 TestMountStart/serial/StartWithMountSecond 22.56
226 TestMountStart/serial/VerifyMountSecond 0.38
227 TestMountStart/serial/DeleteFirst 0.71
228 TestMountStart/serial/VerifyMountPostDelete 0.38
229 TestMountStart/serial/Stop 1.35
230 TestMountStart/serial/RestartStopped 21.48
231 TestMountStart/serial/VerifyMountPostStop 0.39
234 TestMultiNode/serial/FreshStart2Nodes 117.1
235 TestMultiNode/serial/DeployApp2Nodes 4.82
236 TestMultiNode/serial/PingHostFrom2Pods 0.85
237 TestMultiNode/serial/AddNode 51.27
238 TestMultiNode/serial/MultiNodeLabels 0.06
239 TestMultiNode/serial/ProfileList 0.64
240 TestMultiNode/serial/CopyFile 7.63
241 TestMultiNode/serial/StopNode 2.66
242 TestMultiNode/serial/StartAfterStop 40.79
243 TestMultiNode/serial/RestartKeepsNodes 177.75
244 TestMultiNode/serial/DeleteNode 2.41
245 TestMultiNode/serial/StopMultiNode 23.92
246 TestMultiNode/serial/RestartMultiNode 90.41
247 TestMultiNode/serial/ValidateNameConflict 47.62
252 TestPreload 120.05
254 TestScheduledStopUnix 119.7
255 TestSkaffold 130.07
258 TestRunningBinaryUpgrade 160.37
260 TestKubernetesUpgrade 212.84
273 TestStoppedBinaryUpgrade/Setup 1.19
274 TestStoppedBinaryUpgrade/Upgrade 172.59
276 TestPause/serial/Start 95.22
277 TestPause/serial/SecondStartNoReconfiguration 76.89
286 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
287 TestNoKubernetes/serial/StartWithK8s 57.02
288 TestStoppedBinaryUpgrade/MinikubeLogs 1.23
289 TestNoKubernetes/serial/StartWithStopK8s 37.15
290 TestPause/serial/Pause 0.74
291 TestPause/serial/VerifyStatus 0.31
292 TestPause/serial/Unpause 0.7
293 TestPause/serial/PauseAgain 0.88
294 TestPause/serial/DeletePaused 0.92
295 TestPause/serial/VerifyDeletedResources 3.04
296 TestNoKubernetes/serial/Start 57.68
297 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
298 TestNoKubernetes/serial/ProfileList 8.95
299 TestNoKubernetes/serial/Stop 1.42
300 TestNoKubernetes/serial/StartNoArgs 48.04
301 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
302 TestNetworkPlugins/group/auto/Start 82.78
303 TestNetworkPlugins/group/kindnet/Start 113.13
304 TestNetworkPlugins/group/calico/Start 93.92
305 TestNetworkPlugins/group/auto/KubeletFlags 0.25
306 TestNetworkPlugins/group/auto/NetCatPod 12.3
307 TestNetworkPlugins/group/auto/DNS 0.23
308 TestNetworkPlugins/group/auto/Localhost 0.2
309 TestNetworkPlugins/group/auto/HairPin 0.18
310 TestNetworkPlugins/group/custom-flannel/Start 69.68
311 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
312 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
313 TestNetworkPlugins/group/kindnet/NetCatPod 11.47
314 TestNetworkPlugins/group/kindnet/DNS 0.23
315 TestNetworkPlugins/group/kindnet/Localhost 0.23
316 TestNetworkPlugins/group/kindnet/HairPin 0.19
317 TestNetworkPlugins/group/calico/ControllerPod 6.01
318 TestNetworkPlugins/group/calico/KubeletFlags 0.25
319 TestNetworkPlugins/group/calico/NetCatPod 11.41
320 TestNetworkPlugins/group/false/Start 75.51
321 TestNetworkPlugins/group/calico/DNS 0.36
322 TestNetworkPlugins/group/calico/Localhost 0.17
323 TestNetworkPlugins/group/calico/HairPin 0.19
324 TestNetworkPlugins/group/enable-default-cni/Start 64.92
325 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
326 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.3
327 TestNetworkPlugins/group/custom-flannel/DNS 0.19
328 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
329 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
330 TestNetworkPlugins/group/flannel/Start 70.43
331 TestNetworkPlugins/group/bridge/Start 92.92
332 TestNetworkPlugins/group/false/KubeletFlags 0.26
333 TestNetworkPlugins/group/false/NetCatPod 11.32
334 TestNetworkPlugins/group/false/DNS 0.24
335 TestNetworkPlugins/group/false/Localhost 0.21
336 TestNetworkPlugins/group/false/HairPin 0.18
337 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
338 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.52
339 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
340 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
341 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
342 TestNetworkPlugins/group/kubenet/Start 95.07
344 TestStartStop/group/old-k8s-version/serial/FirstStart 76.94
345 TestNetworkPlugins/group/flannel/ControllerPod 6.01
346 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
347 TestNetworkPlugins/group/flannel/NetCatPod 12.29
348 TestNetworkPlugins/group/flannel/DNS 0.22
349 TestNetworkPlugins/group/flannel/Localhost 0.19
350 TestNetworkPlugins/group/flannel/HairPin 0.21
351 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
352 TestNetworkPlugins/group/bridge/NetCatPod 11.32
354 TestStartStop/group/no-preload/serial/FirstStart 73.7
355 TestNetworkPlugins/group/bridge/DNS 0.3
356 TestNetworkPlugins/group/bridge/Localhost 0.21
357 TestNetworkPlugins/group/bridge/HairPin 0.18
359 TestStartStop/group/embed-certs/serial/FirstStart 76.16
360 TestStartStop/group/old-k8s-version/serial/DeployApp 10.39
361 TestNetworkPlugins/group/kubenet/KubeletFlags 0.25
362 TestNetworkPlugins/group/kubenet/NetCatPod 12.4
363 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.36
364 TestStartStop/group/old-k8s-version/serial/Stop 14.12
365 TestNetworkPlugins/group/kubenet/DNS 0.21
366 TestNetworkPlugins/group/kubenet/Localhost 0.16
367 TestNetworkPlugins/group/kubenet/HairPin 0.17
368 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
369 TestStartStop/group/old-k8s-version/serial/SecondStart 45.89
371 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 75
372 TestStartStop/group/no-preload/serial/DeployApp 9.41
373 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.33
374 TestStartStop/group/no-preload/serial/Stop 14.05
375 TestStartStop/group/embed-certs/serial/DeployApp 8.35
376 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.29
377 TestStartStop/group/no-preload/serial/SecondStart 53.51
378 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.3
379 TestStartStop/group/embed-certs/serial/Stop 12.42
380 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 14.01
381 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
382 TestStartStop/group/embed-certs/serial/SecondStart 53.32
383 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
384 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
385 TestStartStop/group/old-k8s-version/serial/Pause 3.73
387 TestStartStop/group/newest-cni/serial/FirstStart 71.52
388 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.42
389 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 10.01
390 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.27
391 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.03
392 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
393 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
394 TestStartStop/group/no-preload/serial/Pause 3.42
395 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
396 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.29
397 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 8.01
398 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
399 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
400 TestStartStop/group/embed-certs/serial/Pause 3.18
401 TestStartStop/group/newest-cni/serial/DeployApp 0
402 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.97
403 TestStartStop/group/newest-cni/serial/Stop 13.38
404 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
405 TestStartStop/group/newest-cni/serial/SecondStart 34.43
406 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
407 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.1
408 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
409 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.79
410 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
411 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
412 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
413 TestStartStop/group/newest-cni/serial/Pause 2.64
TestDownloadOnly/v1.28.0/json-events (6.92s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-383930 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2  --auto-update-drivers=false
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-383930 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2  --auto-update-drivers=false: (6.91649075s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.92s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0929 11:30:20.803405  595293 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
I0929 11:30:20.803515  595293 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21654-591397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-383930
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-383930: exit status 85 (67.60456ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                     │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-383930 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2  --auto-update-drivers=false │ download-only-383930 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │          │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:30:13
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:30:13.930647  595305 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:30:13.930774  595305 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:30:13.930782  595305 out.go:374] Setting ErrFile to fd 2...
	I0929 11:30:13.930788  595305 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:30:13.930984  595305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21654-591397/.minikube/bin
	W0929 11:30:13.931132  595305 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21654-591397/.minikube/config/config.json: open /home/jenkins/minikube-integration/21654-591397/.minikube/config/config.json: no such file or directory
	I0929 11:30:13.931643  595305 out.go:368] Setting JSON to true
	I0929 11:30:13.932676  595305 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":4362,"bootTime":1759141052,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:30:13.932785  595305 start.go:140] virtualization: kvm guest
	I0929 11:30:13.935165  595305 out.go:99] [download-only-383930] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:30:13.935286  595305 notify.go:220] Checking for updates...
	W0929 11:30:13.935344  595305 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21654-591397/.minikube/cache/preloaded-tarball: no such file or directory
	I0929 11:30:13.937014  595305 out.go:171] MINIKUBE_LOCATION=21654
	I0929 11:30:13.938588  595305 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:30:13.940055  595305 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21654-591397/kubeconfig
	I0929 11:30:13.941760  595305 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21654-591397/.minikube
	I0929 11:30:13.943345  595305 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0929 11:30:13.946245  595305 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 11:30:13.946561  595305 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:30:13.978494  595305 out.go:99] Using the kvm2 driver based on user configuration
	I0929 11:30:13.978536  595305 start.go:304] selected driver: kvm2
	I0929 11:30:13.978549  595305 start.go:924] validating driver "kvm2" against <nil>
	I0929 11:30:13.978916  595305 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:30:13.979044  595305 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21654-591397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:30:13.993821  595305 install.go:163] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:30:13.993855  595305 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21654-591397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:30:14.008132  595305 install.go:163] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:30:14.008187  595305 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 11:30:14.008807  595305 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I0929 11:30:14.009019  595305 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 11:30:14.009053  595305 cni.go:84] Creating CNI manager for ""
	I0929 11:30:14.009126  595305 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 11:30:14.009141  595305 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 11:30:14.009216  595305 start.go:348] cluster config:
	{Name:download-only-383930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-383930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:30:14.009405  595305 iso.go:125] acquiring lock: {Name:mk3bf2644aacab696b9f4d566a6d81a30d75b71a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:30:14.011476  595305 out.go:99] Downloading VM boot image ...
	I0929 11:30:14.011528  595305 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21654-591397/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I0929 11:30:16.884250  595305 out.go:99] Starting "download-only-383930" primary control-plane node in "download-only-383930" cluster
	I0929 11:30:16.884291  595305 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0929 11:30:16.904832  595305 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I0929 11:30:16.904877  595305 cache.go:58] Caching tarball of preloaded images
	I0929 11:30:16.905147  595305 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0929 11:30:16.907152  595305 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0929 11:30:16.907173  595305 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 ...
	I0929 11:30:16.932583  595305 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> /home/jenkins/minikube-integration/21654-591397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-383930 host does not exist
	  To start a cluster, run: "minikube start -p download-only-383930"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-383930
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (3.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-221115 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=kvm2  --auto-update-drivers=false
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-221115 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=kvm2  --auto-update-drivers=false: (3.902674472s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (3.90s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0929 11:30:25.067335  595293 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
I0929 11:30:25.067388  595293 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21654-591397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-221115
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-221115: exit status 85 (64.725513ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                     │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-383930 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2  --auto-update-drivers=false │ download-only-383930 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ delete  │ --all                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ delete  │ -p download-only-383930                                                                                                                                                      │ download-only-383930 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ start   │ -o=json --download-only -p download-only-221115 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=kvm2  --auto-update-drivers=false │ download-only-221115 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:30:21
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:30:21.206360  595492 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:30:21.206609  595492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:30:21.206618  595492 out.go:374] Setting ErrFile to fd 2...
	I0929 11:30:21.206621  595492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:30:21.206829  595492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21654-591397/.minikube/bin
	I0929 11:30:21.207330  595492 out.go:368] Setting JSON to true
	I0929 11:30:21.208269  595492 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":4369,"bootTime":1759141052,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:30:21.208359  595492 start.go:140] virtualization: kvm guest
	I0929 11:30:21.210454  595492 out.go:99] [download-only-221115] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:30:21.210607  595492 notify.go:220] Checking for updates...
	I0929 11:30:21.212175  595492 out.go:171] MINIKUBE_LOCATION=21654
	I0929 11:30:21.213587  595492 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:30:21.214889  595492 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21654-591397/kubeconfig
	I0929 11:30:21.216250  595492 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21654-591397/.minikube
	I0929 11:30:21.217411  595492 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-221115 host does not exist
	  To start a cluster, run: "minikube start -p download-only-221115"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.15s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-221115
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.66s)

=== RUN   TestBinaryMirror
I0929 11:30:25.708710  595293 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-005122 --alsologtostderr --binary-mirror http://127.0.0.1:35607 --driver=kvm2  --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-005122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-005122
--- PASS: TestBinaryMirror (0.66s)

TestOffline (61.87s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-656654 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-656654 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --auto-update-drivers=false: (1m1.003610199s)
helpers_test.go:175: Cleaning up "offline-docker-656654" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-656654
--- PASS: TestOffline (61.87s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-214441
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-214441: exit status 85 (55.500234ms)

                                                
                                                
-- stdout --
	* Profile "addons-214441" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-214441"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-214441
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-214441: exit status 85 (56.310899ms)

                                                
                                                
-- stdout --
	* Profile "addons-214441" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-214441"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (172.47s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-214441 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-214441 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m52.472575025s)
--- PASS: TestAddons/Setup (172.47s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-214441 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-214441 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (9.64s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-214441 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-214441 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d3744863-6650-4760-a0e5-ba5372140cd8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d3744863-6650-4760-a0e5-ba5372140cd8] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004520452s
addons_test.go:694: (dbg) Run:  kubectl --context addons-214441 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-214441 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-214441 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.64s)

TestAddons/parallel/Registry (14.84s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 8.011747ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-d7zx7" [e0282924-ef3d-48eb-8906-ea10f183b39e] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006532914s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-grb7m" [d79830e7-a640-46a7-9b07-38e39aceac96] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004981121s
addons_test.go:392: (dbg) Run:  kubectl --context addons-214441 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-214441 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-214441 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.053003132s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-214441 ip
2025/09/29 11:40:05 [DEBUG] GET http://192.168.39.76:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214441 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.84s)

TestAddons/parallel/RegistryCreds (0.78s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 15.999733ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-214441
addons_test.go:332: (dbg) Run:  kubectl --context addons-214441 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214441 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.78s)

TestAddons/parallel/InspektorGadget (6.23s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-xvvvf" [20d64e0b-248b-4034-92e2-2e9bd22a68f3] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.007921836s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214441 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.23s)

TestAddons/parallel/MetricsServer (7.52s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 8.310545ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-zlrv7" [003bc9b9-be00-4e08-8af7-3443d0502181] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005887056s
addons_test.go:463: (dbg) Run:  kubectl --context addons-214441 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214441 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-214441 addons disable metrics-server --alsologtostderr -v=1: (1.422561573s)
--- PASS: TestAddons/parallel/MetricsServer (7.52s)

TestAddons/parallel/Headlamp (18.79s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-214441 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-214441 --alsologtostderr -v=1: (1.056560094s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-lvssq" [db7383f7-6c6b-4651-b518-a91a9668edd9] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-lvssq" [db7383f7-6c6b-4651-b518-a91a9668edd9] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-lvssq" [db7383f7-6c6b-4651-b518-a91a9668edd9] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.006117563s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214441 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-214441 addons disable headlamp --alsologtostderr -v=1: (5.731115251s)
--- PASS: TestAddons/parallel/Headlamp (18.79s)

TestAddons/parallel/CloudSpanner (5.56s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-vpv4f" [315afdb8-1365-404c-8d62-28a67dfd358a] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004226636s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214441 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.56s)

TestAddons/parallel/NvidiaDevicePlugin (6.42s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-x7b8m" [b96ad59d-d30d-4437-a5c7-ce0c5fc69348] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003694941s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214441 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.42s)

TestAddons/StoppedEnableDisable (13.52s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-214441
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-214441: (13.233243847s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-214441
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-214441
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-214441
--- PASS: TestAddons/StoppedEnableDisable (13.52s)

TestCertOptions (67.01s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-466363 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --auto-update-drivers=false
E0929 12:42:45.197250  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:42:50.765323  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/skaffold-884283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-466363 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --auto-update-drivers=false: (1m4.760548698s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-466363 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-466363 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-466363 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-466363" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-466363
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-466363: (1.52825119s)
--- PASS: TestCertOptions (67.01s)

TestCertExpiration (334.53s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-560501 --memory=3072 --cert-expiration=3m --driver=kvm2  --auto-update-drivers=false
E0929 12:40:48.265451  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-560501 --memory=3072 --cert-expiration=3m --driver=kvm2  --auto-update-drivers=false: (1m10.398861985s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-560501 --memory=3072 --cert-expiration=8760h --driver=kvm2  --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-560501 --memory=3072 --cert-expiration=8760h --driver=kvm2  --auto-update-drivers=false: (1m23.124536544s)
helpers_test.go:175: Cleaning up "cert-expiration-560501" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-560501
E0929 12:46:21.189774  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/gvisor-756631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-560501: (1.003426403s)
--- PASS: TestCertExpiration (334.53s)

TestDockerFlags (82.92s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-800441 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2  --auto-update-drivers=false
E0929 12:41:39.078807  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/skaffold-884283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:41:49.320904  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/skaffold-884283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-800441 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2  --auto-update-drivers=false: (1m21.2608843s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-800441 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-800441 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-800441" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-800441
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-800441: (1.161452593s)
--- PASS: TestDockerFlags (82.92s)

TestForceSystemdFlag (65.24s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-033952 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-033952 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --auto-update-drivers=false: (1m3.961362706s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-033952 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-033952" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-033952
--- PASS: TestForceSystemdFlag (65.24s)

TestForceSystemdEnv (54.11s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-771105 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-771105 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --auto-update-drivers=false: (52.6605638s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-771105 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-771105" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-771105
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-771105: (1.107458876s)
--- PASS: TestForceSystemdEnv (54.11s)

TestKVMDriverInstallOrUpdate (0.62s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0929 12:40:42.774281  595293 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0929 12:40:42.774460  595293 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate263905422/001:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0929 12:40:42.812936  595293 install.go:163] /tmp/TestKVMDriverInstallOrUpdate263905422/001/docker-machine-driver-kvm2 version is 1.1.1
W0929 12:40:42.812995  595293 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W0929 12:40:42.813151  595293 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0929 12:40:42.813217  595293 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate263905422/001/docker-machine-driver-kvm2
I0929 12:40:43.248309  595293 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate263905422/001:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0929 12:40:43.264593  595293 install.go:163] /tmp/TestKVMDriverInstallOrUpdate263905422/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.62s)

TestErrorSpam/setup (43.92s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-828935 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-828935 --driver=kvm2  --auto-update-drivers=false
E0929 11:48:18.895359  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:18.901780  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:18.913224  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:18.934722  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:18.976188  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:19.057765  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:19.219446  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:19.541200  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:20.183523  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:21.465191  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:24.028180  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:29.149734  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:39.391954  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:59.873261  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-828935 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-828935 --driver=kvm2  --auto-update-drivers=false: (43.920643535s)
--- PASS: TestErrorSpam/setup (43.92s)

TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828935 --log_dir /tmp/nospam-828935 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828935 --log_dir /tmp/nospam-828935 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828935 --log_dir /tmp/nospam-828935 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.82s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828935 --log_dir /tmp/nospam-828935 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828935 --log_dir /tmp/nospam-828935 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828935 --log_dir /tmp/nospam-828935 status
--- PASS: TestErrorSpam/status (0.82s)

TestErrorSpam/pause (1.46s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828935 --log_dir /tmp/nospam-828935 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828935 --log_dir /tmp/nospam-828935 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828935 --log_dir /tmp/nospam-828935 pause
--- PASS: TestErrorSpam/pause (1.46s)

TestErrorSpam/unpause (1.68s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828935 --log_dir /tmp/nospam-828935 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828935 --log_dir /tmp/nospam-828935 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828935 --log_dir /tmp/nospam-828935 unpause
--- PASS: TestErrorSpam/unpause (1.68s)

TestErrorSpam/stop (15.2s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828935 --log_dir /tmp/nospam-828935 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-828935 --log_dir /tmp/nospam-828935 stop: (11.764691094s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828935 --log_dir /tmp/nospam-828935 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-828935 --log_dir /tmp/nospam-828935 stop: (1.937323927s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828935 --log_dir /tmp/nospam-828935 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-828935 --log_dir /tmp/nospam-828935 stop: (1.500066489s)
--- PASS: TestErrorSpam/stop (15.20s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21654-591397/.minikube/files/etc/test/nested/copy/595293/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (61.31s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-345567 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --auto-update-drivers=false
E0929 11:49:40.836154  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-345567 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --auto-update-drivers=false: (1m1.305739887s)
--- PASS: TestFunctional/serial/StartWithProxy (61.31s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (66.39s)

=== RUN   TestFunctional/serial/SoftStart
I0929 11:50:22.730407  595293 config.go:182] Loaded profile config "functional-345567": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-345567 --alsologtostderr -v=8
E0929 11:51:02.758356  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-345567 --alsologtostderr -v=8: (1m6.387085651s)
functional_test.go:678: soft start took 1m6.387929946s for "functional-345567" cluster.
I0929 11:51:29.118000  595293 config.go:182] Loaded profile config "functional-345567": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (66.39s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-345567 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.35s)

TestFunctional/serial/CacheCmd/cache/add_local (0.77s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-345567 /tmp/TestFunctionalserialCacheCmdcacheadd_local1043022049/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 cache add minikube-local-cache-test:functional-345567
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 cache delete minikube-local-cache-test:functional-345567
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-345567
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.77s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345567 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (235.21514ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.23s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 kubectl -- --context functional-345567 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-345567 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (63.65s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-345567 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-345567 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m3.645845715s)
functional_test.go:776: restart took 1m3.645989113s for "functional-345567" cluster.
I0929 11:52:37.936696  595293 config.go:182] Loaded profile config "functional-345567": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (63.65s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-345567 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.13s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-345567 logs: (1.125406179s)
--- PASS: TestFunctional/serial/LogsCmd (1.13s)

TestFunctional/serial/LogsFileCmd (1.19s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 logs --file /tmp/TestFunctionalserialLogsFileCmd1246269605/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-345567 logs --file /tmp/TestFunctionalserialLogsFileCmd1246269605/001/logs.txt: (1.193497174s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.19s)

TestFunctional/serial/InvalidService (4.04s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-345567 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-345567
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-345567: exit status 115 (297.923199ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.165:32440 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-345567 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.04s)

TestFunctional/parallel/ConfigCmd (0.36s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345567 config get cpus: exit status 14 (56.030522ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345567 config get cpus: exit status 14 (63.486241ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)

TestFunctional/parallel/DryRun (0.34s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-345567 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-345567 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --auto-update-drivers=false: exit status 23 (167.529541ms)

                                                
                                                
-- stdout --
	* [functional-345567] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21654
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21654-591397/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21654-591397/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:52:54.561250  608288 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:52:54.561377  608288 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:52:54.561386  608288 out.go:374] Setting ErrFile to fd 2...
	I0929 11:52:54.561392  608288 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:52:54.561639  608288 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21654-591397/.minikube/bin
	I0929 11:52:54.562083  608288 out.go:368] Setting JSON to false
	I0929 11:52:54.563203  608288 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5723,"bootTime":1759141052,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:52:54.563300  608288 start.go:140] virtualization: kvm guest
	I0929 11:52:54.565342  608288 out.go:179] * [functional-345567] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:52:54.566872  608288 out.go:179]   - MINIKUBE_LOCATION=21654
	I0929 11:52:54.566868  608288 notify.go:220] Checking for updates...
	I0929 11:52:54.569456  608288 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:52:54.571502  608288 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21654-591397/kubeconfig
	I0929 11:52:54.572800  608288 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21654-591397/.minikube
	I0929 11:52:54.573998  608288 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:52:54.575148  608288 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:52:54.577280  608288 config.go:182] Loaded profile config "functional-345567": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:52:54.577898  608288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:52:54.577974  608288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:52:54.594424  608288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32907
	I0929 11:52:54.595047  608288 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:52:54.595945  608288 main.go:141] libmachine: Using API Version  1
	I0929 11:52:54.596020  608288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:52:54.596513  608288 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:52:54.596784  608288 main.go:141] libmachine: (functional-345567) Calling .DriverName
	I0929 11:52:54.597074  608288 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:52:54.597416  608288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:52:54.597479  608288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:52:54.613561  608288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45581
	I0929 11:52:54.614369  608288 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:52:54.614933  608288 main.go:141] libmachine: Using API Version  1
	I0929 11:52:54.614961  608288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:52:54.615403  608288 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:52:54.615660  608288 main.go:141] libmachine: (functional-345567) Calling .DriverName
	I0929 11:52:54.652726  608288 out.go:179] * Using the kvm2 driver based on existing profile
	I0929 11:52:54.654256  608288 start.go:304] selected driver: kvm2
	I0929 11:52:54.654271  608288 start.go:924] validating driver "kvm2" against &{Name:functional-345567 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-345567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:52:54.654419  608288 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:52:54.656753  608288 out.go:203] 
	W0929 11:52:54.658062  608288 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0929 11:52:54.659233  608288 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-345567 --dry-run --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.34s)
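
The two invocations above show the dry-run validation path: an undersized --memory request is rejected with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before anything is started, while a dry run against the existing profile succeeds. A shell sketch using the same flags as this run:

    out/minikube-linux-amd64 start -p functional-345567 --dry-run --memory 250MB --driver=kvm2 --auto-update-drivers=false
    echo "exit=$?"   # 23: requested 250MiB is below the 1800MB usable minimum
    out/minikube-linux-amd64 start -p functional-345567 --dry-run --alsologtostderr -v=1 --driver=kvm2 --auto-update-drivers=false
    echo "exit=$?"   # 0: the profile validates without creating or modifying the VM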

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-345567 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-345567 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --auto-update-drivers=false: exit status 23 (157.886732ms)

                                                
                                                
-- stdout --
	* [functional-345567] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21654
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21654-591397/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21654-591397/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:52:54.886007  608412 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:52:54.886275  608412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:52:54.886285  608412 out.go:374] Setting ErrFile to fd 2...
	I0929 11:52:54.886290  608412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:52:54.886575  608412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21654-591397/.minikube/bin
	I0929 11:52:54.887080  608412 out.go:368] Setting JSON to false
	I0929 11:52:54.888152  608412 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5723,"bootTime":1759141052,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:52:54.888257  608412 start.go:140] virtualization: kvm guest
	I0929 11:52:54.890356  608412 out.go:179] * [functional-345567] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0929 11:52:54.891776  608412 notify.go:220] Checking for updates...
	I0929 11:52:54.891846  608412 out.go:179]   - MINIKUBE_LOCATION=21654
	I0929 11:52:54.893445  608412 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:52:54.894736  608412 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21654-591397/kubeconfig
	I0929 11:52:54.896027  608412 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21654-591397/.minikube
	I0929 11:52:54.897194  608412 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:52:54.898527  608412 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:52:54.901462  608412 config.go:182] Loaded profile config "functional-345567": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:52:54.902092  608412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:52:54.902190  608412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:52:54.918838  608412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34729
	I0929 11:52:54.919337  608412 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:52:54.919911  608412 main.go:141] libmachine: Using API Version  1
	I0929 11:52:54.919942  608412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:52:54.920387  608412 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:52:54.920611  608412 main.go:141] libmachine: (functional-345567) Calling .DriverName
	I0929 11:52:54.920900  608412 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:52:54.921299  608412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 11:52:54.921348  608412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:52:54.936850  608412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0929 11:52:54.937516  608412 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:52:54.938257  608412 main.go:141] libmachine: Using API Version  1
	I0929 11:52:54.938293  608412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:52:54.938784  608412 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:52:54.939026  608412 main.go:141] libmachine: (functional-345567) Calling .DriverName
	I0929 11:52:54.980510  608412 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I0929 11:52:54.981656  608412 start.go:304] selected driver: kvm2
	I0929 11:52:54.981676  608412 start.go:924] validating driver "kvm2" against &{Name:functional-345567 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-345567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:52:54.981806  608412 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:52:54.984075  608412 out.go:203] 
	W0929 11:52:54.986131  608412 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0929 11:52:54.987384  608412 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
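
This subtest repeats the undersized dry run with a French locale so the RSRC_INSUFFICIENT_REQ_MEMORY message comes back translated ("Fermeture en raison de ..."). The log does not show how the locale is selected; presumably it is set through the environment, for example (assumption, not taken from this log):

    # LC_ALL=fr is an assumed way to trigger minikube's French translations
    LC_ALL=fr out/minikube-linux-amd64 start -p functional-345567 --dry-run --memory 250MB --driver=kvm2 --auto-update-drivers=false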

                                                
                                    
TestFunctional/parallel/StatusCmd (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.00s)
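
The -f flag shown above takes a Go template over the status fields, so the same three checks can be replayed directly (same binary and profile as this run; the "kublet" label is kept exactly as the test writes it):

    out/minikube-linux-amd64 -p functional-345567 status
    out/minikube-linux-amd64 -p functional-345567 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-amd64 -p functional-345567 status -o json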

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-345567 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-345567 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-lrm8c" [281ce74f-9480-414f-9dc5-3ebcb644fa62] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-lrm8c" [281ce74f-9480-414f-9dc5-3ebcb644fa62] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.00690862s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.165:32062
functional_test.go:1680: http://192.168.39.165:32062: success! body:
Request served by hello-node-connect-7d85dfc575-lrm8c

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.165:32062
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.60s)
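
The flow above (create a deployment, expose it as a NodePort, resolve its URL through minikube, then hit it) can be sketched as follows; the wait and curl steps are approximations of what the test helpers do:

    kubectl --context functional-345567 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-345567 expose deployment hello-node-connect --type=NodePort --port=8080
    kubectl --context functional-345567 wait pod -l app=hello-node-connect --for=condition=Ready --timeout=120s
    URL=$(out/minikube-linux-amd64 -p functional-345567 service hello-node-connect --url)
    curl -s "$URL"   # echo-server replies with the request it received (method, host, headers)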

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh -n functional-345567 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 cp functional-345567:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4140134175/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh -n functional-345567 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh -n functional-345567 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.54s)
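
A condensed version of the copy round trip exercised above; the local destination path is arbitrary (the test uses a generated temp directory), and the last copy shows that, per this run, missing parent directories in the guest are created on the fly:

    out/minikube-linux-amd64 -p functional-345567 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-345567 ssh -n functional-345567 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p functional-345567 cp functional-345567:/home/docker/cp-test.txt /tmp/cp-test.txt
    out/minikube-linux-amd64 -p functional-345567 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
    out/minikube-linux-amd64 -p functional-345567 ssh -n functional-345567 "sudo cat /tmp/does/not/exist/cp-test.txt"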

                                                
                                    
TestFunctional/parallel/FileSync (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/595293/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh "sudo cat /etc/test/nested/copy/595293/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

                                                
                                    
TestFunctional/parallel/CertSync (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/595293.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh "sudo cat /etc/ssl/certs/595293.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/595293.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh "sudo cat /usr/share/ca-certificates/595293.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5952932.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh "sudo cat /etc/ssl/certs/5952932.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5952932.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh "sudo cat /usr/share/ca-certificates/5952932.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.52s)
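
The paths being probed follow a pattern: a test certificate named with the test process ID (595293 in this run, the same number that appears in the retry log prefixes) must show up under /etc/ssl/certs and /usr/share/ca-certificates inside the VM, along with a hash-named copy. One of those probes, replayed by hand:

    out/minikube-linux-amd64 -p functional-345567 ssh "sudo cat /etc/ssl/certs/595293.pem"
    out/minikube-linux-amd64 -p functional-345567 ssh "sudo cat /usr/share/ca-certificates/595293.pem"
    out/minikube-linux-amd64 -p functional-345567 ssh "sudo cat /etc/ssl/certs/51391683.0"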

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-345567 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345567 ssh "sudo systemctl is-active crio": exit status 1 (220.716384ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)
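
Because this cluster uses the docker container runtime, crio must be inactive inside the VM; the check and its expected non-zero exit:

    out/minikube-linux-amd64 -p functional-345567 ssh "sudo systemctl is-active crio"
    echo "exit=$?"   # prints "inactive"; systemctl is-active exits 3 for an inactive unit, so the ssh exits non-zero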

                                                
                                    
TestFunctional/parallel/License (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.25s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-345567 docker-env) && out/minikube-linux-amd64 status -p functional-345567"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-345567 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.97s)
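
The docker-env round trip above can be reproduced in any bash shell; after the eval, the local docker CLI talks to the daemon inside the functional-345567 VM:

    eval "$(out/minikube-linux-amd64 -p functional-345567 docker-env)"
    out/minikube-linux-amd64 status -p functional-345567
    docker images   # now lists the images held by the VM's docker daemon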

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
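
All three UpdateContextCmd variants run the same command, which re-syncs the kubeconfig entry for the profile with the running cluster's current IP and port:

    out/minikube-linux-amd64 -p functional-345567 update-context --alsologtostderr -v=2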

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (9.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-345567 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-345567 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-xr87t" [dbc57e0f-a72f-4bcc-8d09-6194ca16b3fa] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-xr87t" [dbc57e0f-a72f-4bcc-8d09-6194ca16b3fa] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.026210593s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.25s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "313.827545ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "51.689768ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "286.900733ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "52.626617ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)
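
The three ProfileCmd subtests compare the full and "light" listings; the light variants skip probing cluster status, which is why they return in roughly 50ms versus roughly 300ms above:

    out/minikube-linux-amd64 profile list
    out/minikube-linux-amd64 profile list -l            # light listing: no status probe
    out/minikube-linux-amd64 profile list -o json
    out/minikube-linux-amd64 profile list -o json --light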

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-345567 /tmp/TestFunctionalparallelMountCmdany-port262507385/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759146768108563619" to /tmp/TestFunctionalparallelMountCmdany-port262507385/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759146768108563619" to /tmp/TestFunctionalparallelMountCmdany-port262507385/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759146768108563619" to /tmp/TestFunctionalparallelMountCmdany-port262507385/001/test-1759146768108563619
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345567 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (202.319231ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 11:52:48.311283  595293 retry.go:31] will retry after 315.54614ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 29 11:52 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 29 11:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 29 11:52 test-1759146768108563619
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh cat /mount-9p/test-1759146768108563619
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-345567 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [d1875fd7-5c24-46c7-a646-68dea19862a6] Pending
helpers_test.go:352: "busybox-mount" [d1875fd7-5c24-46c7-a646-68dea19862a6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [d1875fd7-5c24-46c7-a646-68dea19862a6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [d1875fd7-5c24-46c7-a646-68dea19862a6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004038418s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-345567 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-345567 /tmp/TestFunctionalparallelMountCmdany-port262507385/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.39s)
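
The mount tests run the 9p mount helper as a background daemon and then probe it from inside the VM. A trimmed sketch with a placeholder host directory (the test uses a generated temp dir, and the first findmnt may need one short retry while the mount appears, as the retry above shows):

    out/minikube-linux-amd64 mount -p functional-345567 /tmp/host-dir:/mount-9p --alsologtostderr -v=1 &
    MOUNT_PID=$!
    out/minikube-linux-amd64 -p functional-345567 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-345567 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-345567 ssh "sudo umount -f /mount-9p"
    kill "$MOUNT_PID"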

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 service list -o json
functional_test.go:1504: Took "306.144632ms" to run "out/minikube-linux-amd64 -p functional-345567 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.165:31207
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.50s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-345567 /tmp/TestFunctionalparallelMountCmdspecific-port2914162922/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345567 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (244.270967ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 11:52:55.740382  595293 retry.go:31] will retry after 663.688606ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-345567 /tmp/TestFunctionalparallelMountCmdspecific-port2914162922/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345567 ssh "sudo umount -f /mount-9p": exit status 1 (227.214033ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-345567 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-345567 /tmp/TestFunctionalparallelMountCmdspecific-port2914162922/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.02s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.165:31207
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)
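
Taken together, the ServiceCmd subtests above cover the main output modes of the minikube service command against the hello-node NodePort service created earlier:

    out/minikube-linux-amd64 -p functional-345567 service list
    out/minikube-linux-amd64 -p functional-345567 service list -o json
    out/minikube-linux-amd64 -p functional-345567 service --namespace=default --https --url hello-node
    out/minikube-linux-amd64 -p functional-345567 service hello-node --url --format={{.IP}}
    out/minikube-linux-amd64 -p functional-345567 service hello-node --url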

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-345567 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4128700664/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-345567 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4128700664/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-345567 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4128700664/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345567 ssh "findmnt -T" /mount1: exit status 1 (355.490337ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 11:52:57.874351  595293 retry.go:31] will retry after 537.38955ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-345567 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-345567 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4128700664/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-345567 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4128700664/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-345567 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4128700664/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.53s)
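
Cleanup verification starts three mount daemons and then relies on a single kill switch rather than stopping each one; after it runs, the per-mount helper processes are already gone, hence the "assuming dead" messages above. Sketch with a placeholder host directory:

    for m in /mount1 /mount2 /mount3; do
      out/minikube-linux-amd64 mount -p functional-345567 /tmp/host-dir:$m --alsologtostderr -v=1 &
    done
    out/minikube-linux-amd64 -p functional-345567 ssh "findmnt -T" /mount1
    out/minikube-linux-amd64 mount -p functional-345567 --kill=true   # tears down the mount helpers for this profile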

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-345567 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-345567
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-345567
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-345567 image ls --format short --alsologtostderr:
I0929 11:53:04.729821  609534 out.go:360] Setting OutFile to fd 1 ...
I0929 11:53:04.729935  609534 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:53:04.729944  609534 out.go:374] Setting ErrFile to fd 2...
I0929 11:53:04.729948  609534 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:53:04.730199  609534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21654-591397/.minikube/bin
I0929 11:53:04.730844  609534 config.go:182] Loaded profile config "functional-345567": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:53:04.730950  609534 config.go:182] Loaded profile config "functional-345567": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:53:04.731327  609534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0929 11:53:04.731376  609534 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 11:53:04.746045  609534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34407
I0929 11:53:04.746581  609534 main.go:141] libmachine: () Calling .GetVersion
I0929 11:53:04.747209  609534 main.go:141] libmachine: Using API Version  1
I0929 11:53:04.747251  609534 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 11:53:04.747734  609534 main.go:141] libmachine: () Calling .GetMachineName
I0929 11:53:04.748003  609534 main.go:141] libmachine: (functional-345567) Calling .GetState
I0929 11:53:04.750328  609534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0929 11:53:04.750418  609534 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 11:53:04.764826  609534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34545
I0929 11:53:04.765391  609534 main.go:141] libmachine: () Calling .GetVersion
I0929 11:53:04.765957  609534 main.go:141] libmachine: Using API Version  1
I0929 11:53:04.765983  609534 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 11:53:04.766354  609534 main.go:141] libmachine: () Calling .GetMachineName
I0929 11:53:04.766552  609534 main.go:141] libmachine: (functional-345567) Calling .DriverName
I0929 11:53:04.766787  609534 ssh_runner.go:195] Run: systemctl --version
I0929 11:53:04.766820  609534 main.go:141] libmachine: (functional-345567) Calling .GetSSHHostname
I0929 11:53:04.770963  609534 main.go:141] libmachine: (functional-345567) DBG | domain functional-345567 has defined MAC address 52:54:00:ee:f4:1f in network mk-functional-345567
I0929 11:53:04.771489  609534 main.go:141] libmachine: (functional-345567) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:f4:1f", ip: ""} in network mk-functional-345567: {Iface:virbr1 ExpiryTime:2025-09-29 12:49:37 +0000 UTC Type:0 Mac:52:54:00:ee:f4:1f Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-345567 Clientid:01:52:54:00:ee:f4:1f}
I0929 11:53:04.771519  609534 main.go:141] libmachine: (functional-345567) DBG | domain functional-345567 has defined IP address 192.168.39.165 and MAC address 52:54:00:ee:f4:1f in network mk-functional-345567
I0929 11:53:04.771737  609534 main.go:141] libmachine: (functional-345567) Calling .GetSSHPort
I0929 11:53:04.771960  609534 main.go:141] libmachine: (functional-345567) Calling .GetSSHKeyPath
I0929 11:53:04.772138  609534 main.go:141] libmachine: (functional-345567) Calling .GetSSHUsername
I0929 11:53:04.772290  609534 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/functional-345567/id_rsa Username:docker}
I0929 11:53:04.853639  609534 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0929 11:53:04.877814  609534 main.go:141] libmachine: Making call to close driver server
I0929 11:53:04.877833  609534 main.go:141] libmachine: (functional-345567) Calling .Close
I0929 11:53:04.878159  609534 main.go:141] libmachine: (functional-345567) DBG | Closing plugin on server side
I0929 11:53:04.878190  609534 main.go:141] libmachine: Successfully made call to close driver server
I0929 11:53:04.878207  609534 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 11:53:04.878221  609534 main.go:141] libmachine: Making call to close driver server
I0929 11:53:04.878232  609534 main.go:141] libmachine: (functional-345567) Calling .Close
I0929 11:53:04.878472  609534 main.go:141] libmachine: Successfully made call to close driver server
I0929 11:53:04.878485  609534 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)
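
The image listing subtests differ only in the output format flag: short prints one image reference per line (as above), while table renders the box-drawn table shown in the next subtest:

    out/minikube-linux-amd64 -p functional-345567 image ls --format short
    out/minikube-linux-amd64 -p functional-345567 image ls --format table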

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-345567 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ registry.k8s.io/kube-controller-manager     │ v1.34.0           │ a0af72f2ec6d6 │ 74.9MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0           │ 5f1f5298c888d │ 195MB  │
│ docker.io/library/minikube-local-cache-test │ functional-345567 │ db2130ef49472 │ 30B    │
│ registry.k8s.io/kube-proxy                  │ v1.34.0           │ df0860106674d │ 71.9MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.0           │ 46169d968e920 │ 52.8MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ localhost/my-image                          │ functional-345567 │ dfb64e6d36c5d │ 1.24MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.0           │ 90550c43ad2bc │ 88MB   │
│ docker.io/kicbase/echo-server               │ functional-345567 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-345567 image ls --format table --alsologtostderr:
I0929 11:53:08.266782  609699 out.go:360] Setting OutFile to fd 1 ...
I0929 11:53:08.266882  609699 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:53:08.266888  609699 out.go:374] Setting ErrFile to fd 2...
I0929 11:53:08.266893  609699 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:53:08.267154  609699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21654-591397/.minikube/bin
I0929 11:53:08.267809  609699 config.go:182] Loaded profile config "functional-345567": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:53:08.267910  609699 config.go:182] Loaded profile config "functional-345567": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:53:08.268282  609699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0929 11:53:08.268351  609699 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 11:53:08.282843  609699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42083
I0929 11:53:08.283443  609699 main.go:141] libmachine: () Calling .GetVersion
I0929 11:53:08.284080  609699 main.go:141] libmachine: Using API Version  1
I0929 11:53:08.284129  609699 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 11:53:08.284571  609699 main.go:141] libmachine: () Calling .GetMachineName
I0929 11:53:08.284843  609699 main.go:141] libmachine: (functional-345567) Calling .GetState
I0929 11:53:08.287044  609699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0929 11:53:08.287094  609699 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 11:53:08.301392  609699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38841
I0929 11:53:08.301934  609699 main.go:141] libmachine: () Calling .GetVersion
I0929 11:53:08.302492  609699 main.go:141] libmachine: Using API Version  1
I0929 11:53:08.302524  609699 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 11:53:08.302943  609699 main.go:141] libmachine: () Calling .GetMachineName
I0929 11:53:08.303148  609699 main.go:141] libmachine: (functional-345567) Calling .DriverName
I0929 11:53:08.303400  609699 ssh_runner.go:195] Run: systemctl --version
I0929 11:53:08.303431  609699 main.go:141] libmachine: (functional-345567) Calling .GetSSHHostname
I0929 11:53:08.306765  609699 main.go:141] libmachine: (functional-345567) DBG | domain functional-345567 has defined MAC address 52:54:00:ee:f4:1f in network mk-functional-345567
I0929 11:53:08.307227  609699 main.go:141] libmachine: (functional-345567) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:f4:1f", ip: ""} in network mk-functional-345567: {Iface:virbr1 ExpiryTime:2025-09-29 12:49:37 +0000 UTC Type:0 Mac:52:54:00:ee:f4:1f Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-345567 Clientid:01:52:54:00:ee:f4:1f}
I0929 11:53:08.307262  609699 main.go:141] libmachine: (functional-345567) DBG | domain functional-345567 has defined IP address 192.168.39.165 and MAC address 52:54:00:ee:f4:1f in network mk-functional-345567
I0929 11:53:08.307432  609699 main.go:141] libmachine: (functional-345567) Calling .GetSSHPort
I0929 11:53:08.307604  609699 main.go:141] libmachine: (functional-345567) Calling .GetSSHKeyPath
I0929 11:53:08.307772  609699 main.go:141] libmachine: (functional-345567) Calling .GetSSHUsername
I0929 11:53:08.307915  609699 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/functional-345567/id_rsa Username:docker}
I0929 11:53:08.389483  609699 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0929 11:53:08.421424  609699 main.go:141] libmachine: Making call to close driver server
I0929 11:53:08.421448  609699 main.go:141] libmachine: (functional-345567) Calling .Close
I0929 11:53:08.421759  609699 main.go:141] libmachine: Successfully made call to close driver server
I0929 11:53:08.421788  609699 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 11:53:08.421797  609699 main.go:141] libmachine: Making call to close driver server
I0929 11:53:08.421797  609699 main.go:141] libmachine: (functional-345567) DBG | Closing plugin on server side
I0929 11:53:08.421804  609699 main.go:141] libmachine: (functional-345567) Calling .Close
I0929 11:53:08.422160  609699 main.go:141] libmachine: (functional-345567) DBG | Closing plugin on server side
I0929 11:53:08.422159  609699 main.go:141] libmachine: Successfully made call to close driver server
I0929 11:53:08.422204  609699 main.go:141] libmachine: Making call to close connection to plugin binary
E0929 11:53:18.895828  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:53:46.600314  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-345567 image ls --format json --alsologtostderr:
[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"db2130ef49472609942caa6195688753a35cb33ec76f0053574bbb01b3413a54","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-345567"],"size":"30"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"88000000"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"74900000"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195000000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-345567","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"dfb64e6d36c5d366cc627451b5859960a3f98a28be0df49f3b8d004a828fd06a","repoDigests":[],"repoTags":["localhost/my-image:functional-345567"],"size":"1240000"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"52800000"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"71900000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-345567 image ls --format json --alsologtostderr:
I0929 11:53:08.052478  609675 out.go:360] Setting OutFile to fd 1 ...
I0929 11:53:08.052754  609675 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:53:08.052765  609675 out.go:374] Setting ErrFile to fd 2...
I0929 11:53:08.052770  609675 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:53:08.052975  609675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21654-591397/.minikube/bin
I0929 11:53:08.053598  609675 config.go:182] Loaded profile config "functional-345567": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:53:08.053694  609675 config.go:182] Loaded profile config "functional-345567": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:53:08.054081  609675 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0929 11:53:08.054161  609675 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 11:53:08.068717  609675 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36593
I0929 11:53:08.069328  609675 main.go:141] libmachine: () Calling .GetVersion
I0929 11:53:08.069924  609675 main.go:141] libmachine: Using API Version  1
I0929 11:53:08.069952  609675 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 11:53:08.070341  609675 main.go:141] libmachine: () Calling .GetMachineName
I0929 11:53:08.070551  609675 main.go:141] libmachine: (functional-345567) Calling .GetState
I0929 11:53:08.072507  609675 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0929 11:53:08.072560  609675 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 11:53:08.086669  609675 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33865
I0929 11:53:08.087158  609675 main.go:141] libmachine: () Calling .GetVersion
I0929 11:53:08.087758  609675 main.go:141] libmachine: Using API Version  1
I0929 11:53:08.087789  609675 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 11:53:08.088252  609675 main.go:141] libmachine: () Calling .GetMachineName
I0929 11:53:08.088495  609675 main.go:141] libmachine: (functional-345567) Calling .DriverName
I0929 11:53:08.088804  609675 ssh_runner.go:195] Run: systemctl --version
I0929 11:53:08.088834  609675 main.go:141] libmachine: (functional-345567) Calling .GetSSHHostname
I0929 11:53:08.091944  609675 main.go:141] libmachine: (functional-345567) DBG | domain functional-345567 has defined MAC address 52:54:00:ee:f4:1f in network mk-functional-345567
I0929 11:53:08.092435  609675 main.go:141] libmachine: (functional-345567) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:f4:1f", ip: ""} in network mk-functional-345567: {Iface:virbr1 ExpiryTime:2025-09-29 12:49:37 +0000 UTC Type:0 Mac:52:54:00:ee:f4:1f Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-345567 Clientid:01:52:54:00:ee:f4:1f}
I0929 11:53:08.092470  609675 main.go:141] libmachine: (functional-345567) DBG | domain functional-345567 has defined IP address 192.168.39.165 and MAC address 52:54:00:ee:f4:1f in network mk-functional-345567
I0929 11:53:08.092590  609675 main.go:141] libmachine: (functional-345567) Calling .GetSSHPort
I0929 11:53:08.092798  609675 main.go:141] libmachine: (functional-345567) Calling .GetSSHKeyPath
I0929 11:53:08.092982  609675 main.go:141] libmachine: (functional-345567) Calling .GetSSHUsername
I0929 11:53:08.093173  609675 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/functional-345567/id_rsa Username:docker}
I0929 11:53:08.177132  609675 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0929 11:53:08.207133  609675 main.go:141] libmachine: Making call to close driver server
I0929 11:53:08.207150  609675 main.go:141] libmachine: (functional-345567) Calling .Close
I0929 11:53:08.207471  609675 main.go:141] libmachine: Successfully made call to close driver server
I0929 11:53:08.207493  609675 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 11:53:08.207506  609675 main.go:141] libmachine: Making call to close driver server
I0929 11:53:08.207511  609675 main.go:141] libmachine: (functional-345567) DBG | Closing plugin on server side
I0929 11:53:08.207515  609675 main.go:141] libmachine: (functional-345567) Calling .Close
I0929 11:53:08.207859  609675 main.go:141] libmachine: Successfully made call to close driver server
I0929 11:53:08.207891  609675 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 11:53:08.207919  609675 main.go:141] libmachine: (functional-345567) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
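Note: the output of "image ls --format json" above is a flat JSON array whose objects carry id, repoDigests, repoTags, and size (size is a string). The following is a minimal, illustrative Go sketch for decoding that output; it assumes the same binary path and profile name used in this run, and the struct and program names are hypothetical rather than the code in functional_test.go.

// Illustrative sketch only: decode the JSON emitted by "minikube image ls --format json"
// as logged above. Binary path and profile name are assumptions taken from this run.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// listedImage mirrors the objects in the JSON array shown in the report.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// Same command invocation that appears in the log above.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-345567",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.ID, img.RepoTags, img.Size)
	}
}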
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-345567 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: db2130ef49472609942caa6195688753a35cb33ec76f0053574bbb01b3413a54
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-345567
size: "30"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "71900000"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "52800000"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195000000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "88000000"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "74900000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-345567
- docker.io/kicbase/echo-server:latest
size: "4940000"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-345567 image ls --format yaml --alsologtostderr:
I0929 11:53:04.933009  609558 out.go:360] Setting OutFile to fd 1 ...
I0929 11:53:04.933386  609558 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:53:04.933399  609558 out.go:374] Setting ErrFile to fd 2...
I0929 11:53:04.933403  609558 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:53:04.933710  609558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21654-591397/.minikube/bin
I0929 11:53:04.934418  609558 config.go:182] Loaded profile config "functional-345567": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:53:04.934544  609558 config.go:182] Loaded profile config "functional-345567": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:53:04.934971  609558 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0929 11:53:04.935038  609558 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 11:53:04.949667  609558 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43035
I0929 11:53:04.950277  609558 main.go:141] libmachine: () Calling .GetVersion
I0929 11:53:04.950941  609558 main.go:141] libmachine: Using API Version  1
I0929 11:53:04.950968  609558 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 11:53:04.951369  609558 main.go:141] libmachine: () Calling .GetMachineName
I0929 11:53:04.951594  609558 main.go:141] libmachine: (functional-345567) Calling .GetState
I0929 11:53:04.953876  609558 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0929 11:53:04.953927  609558 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 11:53:04.967877  609558 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46437
I0929 11:53:04.968414  609558 main.go:141] libmachine: () Calling .GetVersion
I0929 11:53:04.968897  609558 main.go:141] libmachine: Using API Version  1
I0929 11:53:04.968920  609558 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 11:53:04.969352  609558 main.go:141] libmachine: () Calling .GetMachineName
I0929 11:53:04.969566  609558 main.go:141] libmachine: (functional-345567) Calling .DriverName
I0929 11:53:04.969852  609558 ssh_runner.go:195] Run: systemctl --version
I0929 11:53:04.969883  609558 main.go:141] libmachine: (functional-345567) Calling .GetSSHHostname
I0929 11:53:04.973437  609558 main.go:141] libmachine: (functional-345567) DBG | domain functional-345567 has defined MAC address 52:54:00:ee:f4:1f in network mk-functional-345567
I0929 11:53:04.973972  609558 main.go:141] libmachine: (functional-345567) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:f4:1f", ip: ""} in network mk-functional-345567: {Iface:virbr1 ExpiryTime:2025-09-29 12:49:37 +0000 UTC Type:0 Mac:52:54:00:ee:f4:1f Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-345567 Clientid:01:52:54:00:ee:f4:1f}
I0929 11:53:04.973998  609558 main.go:141] libmachine: (functional-345567) DBG | domain functional-345567 has defined IP address 192.168.39.165 and MAC address 52:54:00:ee:f4:1f in network mk-functional-345567
I0929 11:53:04.974255  609558 main.go:141] libmachine: (functional-345567) Calling .GetSSHPort
I0929 11:53:04.974472  609558 main.go:141] libmachine: (functional-345567) Calling .GetSSHKeyPath
I0929 11:53:04.974615  609558 main.go:141] libmachine: (functional-345567) Calling .GetSSHUsername
I0929 11:53:04.974758  609558 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/functional-345567/id_rsa Username:docker}
I0929 11:53:05.052018  609558 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0929 11:53:05.076961  609558 main.go:141] libmachine: Making call to close driver server
I0929 11:53:05.076979  609558 main.go:141] libmachine: (functional-345567) Calling .Close
I0929 11:53:05.077371  609558 main.go:141] libmachine: (functional-345567) DBG | Closing plugin on server side
I0929 11:53:05.077408  609558 main.go:141] libmachine: Successfully made call to close driver server
I0929 11:53:05.077434  609558 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 11:53:05.077455  609558 main.go:141] libmachine: Making call to close driver server
I0929 11:53:05.077467  609558 main.go:141] libmachine: (functional-345567) Calling .Close
I0929 11:53:05.077755  609558 main.go:141] libmachine: Successfully made call to close driver server
I0929 11:53:05.077773  609558 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.92s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345567 ssh pgrep buildkitd: exit status 1 (198.203891ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 image build -t localhost/my-image:functional-345567 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-345567 image build -t localhost/my-image:functional-345567 testdata/build --alsologtostderr: (2.52473599s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-345567 image build -t localhost/my-image:functional-345567 testdata/build --alsologtostderr:
I0929 11:53:05.329261  609612 out.go:360] Setting OutFile to fd 1 ...
I0929 11:53:05.329369  609612 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:53:05.329378  609612 out.go:374] Setting ErrFile to fd 2...
I0929 11:53:05.329384  609612 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:53:05.329594  609612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21654-591397/.minikube/bin
I0929 11:53:05.330232  609612 config.go:182] Loaded profile config "functional-345567": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:53:05.331017  609612 config.go:182] Loaded profile config "functional-345567": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:53:05.331416  609612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0929 11:53:05.331461  609612 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 11:53:05.345769  609612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46123
I0929 11:53:05.346444  609612 main.go:141] libmachine: () Calling .GetVersion
I0929 11:53:05.347100  609612 main.go:141] libmachine: Using API Version  1
I0929 11:53:05.347144  609612 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 11:53:05.347525  609612 main.go:141] libmachine: () Calling .GetMachineName
I0929 11:53:05.347728  609612 main.go:141] libmachine: (functional-345567) Calling .GetState
I0929 11:53:05.350278  609612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0929 11:53:05.350326  609612 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 11:53:05.364247  609612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46635
I0929 11:53:05.364750  609612 main.go:141] libmachine: () Calling .GetVersion
I0929 11:53:05.365359  609612 main.go:141] libmachine: Using API Version  1
I0929 11:53:05.365391  609612 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 11:53:05.365796  609612 main.go:141] libmachine: () Calling .GetMachineName
I0929 11:53:05.366069  609612 main.go:141] libmachine: (functional-345567) Calling .DriverName
I0929 11:53:05.366331  609612 ssh_runner.go:195] Run: systemctl --version
I0929 11:53:05.366375  609612 main.go:141] libmachine: (functional-345567) Calling .GetSSHHostname
I0929 11:53:05.369557  609612 main.go:141] libmachine: (functional-345567) DBG | domain functional-345567 has defined MAC address 52:54:00:ee:f4:1f in network mk-functional-345567
I0929 11:53:05.369971  609612 main.go:141] libmachine: (functional-345567) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:f4:1f", ip: ""} in network mk-functional-345567: {Iface:virbr1 ExpiryTime:2025-09-29 12:49:37 +0000 UTC Type:0 Mac:52:54:00:ee:f4:1f Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-345567 Clientid:01:52:54:00:ee:f4:1f}
I0929 11:53:05.370010  609612 main.go:141] libmachine: (functional-345567) DBG | domain functional-345567 has defined IP address 192.168.39.165 and MAC address 52:54:00:ee:f4:1f in network mk-functional-345567
I0929 11:53:05.370205  609612 main.go:141] libmachine: (functional-345567) Calling .GetSSHPort
I0929 11:53:05.370383  609612 main.go:141] libmachine: (functional-345567) Calling .GetSSHKeyPath
I0929 11:53:05.370549  609612 main.go:141] libmachine: (functional-345567) Calling .GetSSHUsername
I0929 11:53:05.370732  609612 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/functional-345567/id_rsa Username:docker}
I0929 11:53:05.447866  609612 build_images.go:161] Building image from path: /tmp/build.445374811.tar
I0929 11:53:05.447934  609612 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0929 11:53:05.461655  609612 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.445374811.tar
I0929 11:53:05.467626  609612 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.445374811.tar: stat -c "%s %y" /var/lib/minikube/build/build.445374811.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.445374811.tar': No such file or directory
I0929 11:53:05.467671  609612 ssh_runner.go:362] scp /tmp/build.445374811.tar --> /var/lib/minikube/build/build.445374811.tar (3072 bytes)
I0929 11:53:05.501760  609612 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.445374811
I0929 11:53:05.515495  609612 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.445374811 -xf /var/lib/minikube/build/build.445374811.tar
I0929 11:53:05.529443  609612 docker.go:361] Building image: /var/lib/minikube/build/build.445374811
I0929 11:53:05.529553  609612 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-345567 /var/lib/minikube/build/build.445374811
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:dfb64e6d36c5d366cc627451b5859960a3f98a28be0df49f3b8d004a828fd06a
#8 writing image sha256:dfb64e6d36c5d366cc627451b5859960a3f98a28be0df49f3b8d004a828fd06a done
#8 naming to localhost/my-image:functional-345567 done
#8 DONE 0.1s
I0929 11:53:07.772123  609612 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-345567 /var/lib/minikube/build/build.445374811: (2.242513772s)
I0929 11:53:07.772209  609612 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.445374811
I0929 11:53:07.787317  609612 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.445374811.tar
I0929 11:53:07.800655  609612 build_images.go:217] Built localhost/my-image:functional-345567 from /tmp/build.445374811.tar
I0929 11:53:07.800700  609612 build_images.go:133] succeeded building to: functional-345567
I0929 11:53:07.800705  609612 build_images.go:134] failed building to: 
I0929 11:53:07.800732  609612 main.go:141] libmachine: Making call to close driver server
I0929 11:53:07.800743  609612 main.go:141] libmachine: (functional-345567) Calling .Close
I0929 11:53:07.801203  609612 main.go:141] libmachine: Successfully made call to close driver server
I0929 11:53:07.801241  609612 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 11:53:07.801251  609612 main.go:141] libmachine: Making call to close driver server
I0929 11:53:07.801260  609612 main.go:141] libmachine: (functional-345567) Calling .Close
I0929 11:53:07.801213  609612 main.go:141] libmachine: (functional-345567) DBG | Closing plugin on server side
I0929 11:53:07.801551  609612 main.go:141] libmachine: Successfully made call to close driver server
I0929 11:53:07.801570  609612 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.92s)
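Note: the passing ImageBuild run above follows a three-step sequence: probe for buildkitd over ssh (the non-zero exit at functional_test.go:323 is tolerated in this run), build testdata/build into localhost/my-image:functional-345567, then list images to confirm the tag. The Go sketch below reproduces that sequence with os/exec under the same assumptions (binary path, profile name); it is illustrative only and "run" is a hypothetical helper, not the test's own code.

// Illustrative sketch only: replay the ImageBuild command sequence shown in the log.
package main

import (
	"fmt"
	"os/exec"
)

// run is a hypothetical helper that invokes the minikube binary used in this report.
func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "functional-345567"

	// Step 1 in the log: probe for buildkitd inside the VM; a non-zero exit here
	// just means the docker runtime handles the build instead of a standalone buildkitd.
	if _, err := run("-p", profile, "ssh", "pgrep", "buildkitd"); err != nil {
		fmt.Println("buildkitd not running; docker will handle the build")
	}

	// Step 2: build the test context into a locally tagged image.
	if out, err := run("-p", profile, "image", "build",
		"-t", "localhost/my-image:"+profile, "testdata/build", "--alsologtostderr"); err != nil {
		panic(fmt.Sprintf("build failed: %v\n%s", err, out))
	}

	// Step 3: list images to confirm the tag is present.
	out, err := run("-p", profile, "image", "ls")
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}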
TestFunctional/parallel/ImageCommands/Setup (0.39s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-345567
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.39s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.01s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 image load --daemon kicbase/echo-server:functional-345567 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.01s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.74s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 image load --daemon kicbase/echo-server:functional-345567 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.74s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.88s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-345567
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 image load --daemon kicbase/echo-server:functional-345567 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.88s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 image save kicbase/echo-server:functional-345567 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 image rm kicbase/echo-server:functional-345567 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.58s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.58s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-345567
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-345567 image save --daemon kicbase/echo-server:functional-345567 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-345567
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-345567
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-345567
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-345567
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestGvisorAddon (141.53s)
=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-756631 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2  --auto-update-drivers=false
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-756631 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2  --auto-update-drivers=false: (48.969338966s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-756631 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-756631 cache add gcr.io/k8s-minikube/gvisor-addon:2: (3.268210287s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-756631 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-756631 addons enable gvisor: (3.961807558s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:352: "gvisor" [9863034c-3257-4f85-98f1-6e49f98492cd] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004554682s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-756631 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:352: "nginx-gvisor" [07fb9271-fd8e-4329-a4b0-13190de66db6] Pending
helpers_test.go:352: "nginx-gvisor" [07fb9271-fd8e-4329-a4b0-13190de66db6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-gvisor" [07fb9271-fd8e-4329-a4b0-13190de66db6] Running
E0929 12:41:28.823972  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/skaffold-884283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:41:28.830423  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/skaffold-884283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:41:28.841923  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/skaffold-884283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:41:28.863475  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/skaffold-884283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:41:28.905228  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/skaffold-884283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:41:28.986759  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/skaffold-884283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 12.005095794s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-756631
E0929 12:41:29.148573  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/skaffold-884283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:41:29.470728  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/skaffold-884283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:41:30.112739  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/skaffold-884283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:41:31.394772  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/skaffold-884283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:41:33.956724  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/skaffold-884283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-756631: (7.728237192s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-756631 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2  --auto-update-drivers=false
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-756631 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2  --auto-update-drivers=false: (47.334777808s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:352: "gvisor" [9863034c-3257-4f85-98f1-6e49f98492cd] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.005296544s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:352: "nginx-gvisor" [07fb9271-fd8e-4329-a4b0-13190de66db6] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.00460844s
helpers_test.go:175: Cleaning up "gvisor-756631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-756631
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-756631: (1.072472245s)
--- PASS: TestGvisorAddon (141.53s)

TestMultiControlPlane/serial/StartCluster (229.02s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --auto-update-drivers=false
E0929 12:03:18.895809  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:04:41.961818  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-813438 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --auto-update-drivers=false: (3m48.245351943s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (229.02s)

TestMultiControlPlane/serial/DeployApp (5.98s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-813438 kubectl -- rollout status deployment/busybox: (3.48997261s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 kubectl -- exec busybox-7b57f96db7-4lvrk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 kubectl -- exec busybox-7b57f96db7-l4jjx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 kubectl -- exec busybox-7b57f96db7-xjdpw -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 kubectl -- exec busybox-7b57f96db7-4lvrk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 kubectl -- exec busybox-7b57f96db7-l4jjx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 kubectl -- exec busybox-7b57f96db7-xjdpw -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 kubectl -- exec busybox-7b57f96db7-4lvrk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 kubectl -- exec busybox-7b57f96db7-l4jjx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 kubectl -- exec busybox-7b57f96db7-xjdpw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.98s)

TestMultiControlPlane/serial/PingHostFromPods (1.41s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 kubectl -- exec busybox-7b57f96db7-4lvrk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 kubectl -- exec busybox-7b57f96db7-4lvrk -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 kubectl -- exec busybox-7b57f96db7-l4jjx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 kubectl -- exec busybox-7b57f96db7-l4jjx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 kubectl -- exec busybox-7b57f96db7-xjdpw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 kubectl -- exec busybox-7b57f96db7-xjdpw -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.41s)

TestMultiControlPlane/serial/AddWorkerNode (51.56s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 node add --alsologtostderr -v 5
E0929 12:07:45.198479  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:45.205034  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:45.216479  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:45.237969  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:45.279509  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:45.361017  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:45.523078  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:45.844890  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-813438 node add --alsologtostderr -v 5: (50.596214014s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 status --alsologtostderr -v 5
E0929 12:07:46.486661  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (51.56s)

TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-813438 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.96s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0929 12:07:47.769054  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.96s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 cp testdata/cp-test.txt ha-813438:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 cp ha-813438:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1658389809/001/cp-test_ha-813438.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 cp ha-813438:/home/docker/cp-test.txt ha-813438-m02:/home/docker/cp-test_ha-813438_ha-813438-m02.txt
E0929 12:07:50.331288  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438-m02 "sudo cat /home/docker/cp-test_ha-813438_ha-813438-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 cp ha-813438:/home/docker/cp-test.txt ha-813438-m03:/home/docker/cp-test_ha-813438_ha-813438-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438-m03 "sudo cat /home/docker/cp-test_ha-813438_ha-813438-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 cp ha-813438:/home/docker/cp-test.txt ha-813438-m04:/home/docker/cp-test_ha-813438_ha-813438-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438-m04 "sudo cat /home/docker/cp-test_ha-813438_ha-813438-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 cp testdata/cp-test.txt ha-813438-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 cp ha-813438-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1658389809/001/cp-test_ha-813438-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 cp ha-813438-m02:/home/docker/cp-test.txt ha-813438:/home/docker/cp-test_ha-813438-m02_ha-813438.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438 "sudo cat /home/docker/cp-test_ha-813438-m02_ha-813438.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 cp ha-813438-m02:/home/docker/cp-test.txt ha-813438-m03:/home/docker/cp-test_ha-813438-m02_ha-813438-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438-m03 "sudo cat /home/docker/cp-test_ha-813438-m02_ha-813438-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 cp ha-813438-m02:/home/docker/cp-test.txt ha-813438-m04:/home/docker/cp-test_ha-813438-m02_ha-813438-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438-m02 "sudo cat /home/docker/cp-test.txt"
E0929 12:07:55.453554  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438-m04 "sudo cat /home/docker/cp-test_ha-813438-m02_ha-813438-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 cp testdata/cp-test.txt ha-813438-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 cp ha-813438-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1658389809/001/cp-test_ha-813438-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 cp ha-813438-m03:/home/docker/cp-test.txt ha-813438:/home/docker/cp-test_ha-813438-m03_ha-813438.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438 "sudo cat /home/docker/cp-test_ha-813438-m03_ha-813438.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 cp ha-813438-m03:/home/docker/cp-test.txt ha-813438-m02:/home/docker/cp-test_ha-813438-m03_ha-813438-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438-m02 "sudo cat /home/docker/cp-test_ha-813438-m03_ha-813438-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 cp ha-813438-m03:/home/docker/cp-test.txt ha-813438-m04:/home/docker/cp-test_ha-813438-m03_ha-813438-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438-m04 "sudo cat /home/docker/cp-test_ha-813438-m03_ha-813438-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 cp testdata/cp-test.txt ha-813438-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 cp ha-813438-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1658389809/001/cp-test_ha-813438-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 cp ha-813438-m04:/home/docker/cp-test.txt ha-813438:/home/docker/cp-test_ha-813438-m04_ha-813438.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438 "sudo cat /home/docker/cp-test_ha-813438-m04_ha-813438.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 cp ha-813438-m04:/home/docker/cp-test.txt ha-813438-m02:/home/docker/cp-test_ha-813438-m04_ha-813438-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438-m02 "sudo cat /home/docker/cp-test_ha-813438-m04_ha-813438-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 cp ha-813438-m04:/home/docker/cp-test.txt ha-813438-m03:/home/docker/cp-test_ha-813438-m04_ha-813438-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 ssh -n ha-813438-m03 "sudo cat /home/docker/cp-test_ha-813438-m04_ha-813438-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.99s)
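
For reference, every cp/ssh pair above exercises the same round trip: copy a file onto a node with `minikube cp`, then read it back over `minikube ssh` and compare it with the local contents. A minimal Go sketch of that pattern, using the binary path, profile, and node names taken from the log above (the roundTrip helper is illustrative only, not the actual helper in helpers_test.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// roundTrip copies src into node:<dst> of the given profile, then reads the
// file back over ssh so the caller can compare it with the local contents.
func roundTrip(profile, node, src, dst string) (string, error) {
	cp := exec.Command("out/minikube-linux-amd64", "-p", profile, "cp", src, node+":"+dst)
	if out, err := cp.CombinedOutput(); err != nil {
		return "", fmt.Errorf("cp failed: %v: %s", err, out)
	}
	cat := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "-n", node, "sudo cat "+dst)
	out, err := cat.Output()
	if err != nil {
		return "", fmt.Errorf("ssh cat failed: %v", err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	got, err := roundTrip("ha-813438", "ha-813438-m02", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	if err != nil {
		panic(err)
	}
	fmt.Println(got)
}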

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (15.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 node stop m02 --alsologtostderr -v 5
E0929 12:08:05.695272  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-813438 node stop m02 --alsologtostderr -v 5: (14.828554114s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-813438 status --alsologtostderr -v 5: exit status 7 (709.8462ms)

                                                
                                                
-- stdout --
	ha-813438
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-813438-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-813438-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-813438-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 12:08:17.285182  616416 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:08:17.285294  616416 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:08:17.285300  616416 out.go:374] Setting ErrFile to fd 2...
	I0929 12:08:17.285306  616416 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:08:17.285553  616416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21654-591397/.minikube/bin
	I0929 12:08:17.285784  616416 out.go:368] Setting JSON to false
	I0929 12:08:17.285835  616416 mustload.go:65] Loading cluster: ha-813438
	I0929 12:08:17.285894  616416 notify.go:220] Checking for updates...
	I0929 12:08:17.286327  616416 config.go:182] Loaded profile config "ha-813438": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:08:17.286351  616416 status.go:174] checking status of ha-813438 ...
	I0929 12:08:17.286773  616416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 12:08:17.286820  616416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:08:17.310196  616416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37941
	I0929 12:08:17.310845  616416 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:08:17.311696  616416 main.go:141] libmachine: Using API Version  1
	I0929 12:08:17.311738  616416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:08:17.312183  616416 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:08:17.312402  616416 main.go:141] libmachine: (ha-813438) Calling .GetState
	I0929 12:08:17.314817  616416 status.go:371] ha-813438 host status = "Running" (err=<nil>)
	I0929 12:08:17.314840  616416 host.go:66] Checking if "ha-813438" exists ...
	I0929 12:08:17.315227  616416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 12:08:17.315298  616416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:08:17.329981  616416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37213
	I0929 12:08:17.330586  616416 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:08:17.331087  616416 main.go:141] libmachine: Using API Version  1
	I0929 12:08:17.331120  616416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:08:17.331519  616416 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:08:17.331782  616416 main.go:141] libmachine: (ha-813438) Calling .GetIP
	I0929 12:08:17.335471  616416 main.go:141] libmachine: (ha-813438) DBG | domain ha-813438 has defined MAC address 52:54:00:e9:1c:f7 in network mk-ha-813438
	I0929 12:08:17.336093  616416 main.go:141] libmachine: (ha-813438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:1c:f7", ip: ""} in network mk-ha-813438: {Iface:virbr1 ExpiryTime:2025-09-29 13:03:15 +0000 UTC Type:0 Mac:52:54:00:e9:1c:f7 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-813438 Clientid:01:52:54:00:e9:1c:f7}
	I0929 12:08:17.336148  616416 main.go:141] libmachine: (ha-813438) DBG | domain ha-813438 has defined IP address 192.168.39.12 and MAC address 52:54:00:e9:1c:f7 in network mk-ha-813438
	I0929 12:08:17.336383  616416 host.go:66] Checking if "ha-813438" exists ...
	I0929 12:08:17.336733  616416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 12:08:17.336782  616416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:08:17.350789  616416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33133
	I0929 12:08:17.351462  616416 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:08:17.352018  616416 main.go:141] libmachine: Using API Version  1
	I0929 12:08:17.352045  616416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:08:17.352465  616416 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:08:17.352695  616416 main.go:141] libmachine: (ha-813438) Calling .DriverName
	I0929 12:08:17.352937  616416 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:08:17.352970  616416 main.go:141] libmachine: (ha-813438) Calling .GetSSHHostname
	I0929 12:08:17.356649  616416 main.go:141] libmachine: (ha-813438) DBG | domain ha-813438 has defined MAC address 52:54:00:e9:1c:f7 in network mk-ha-813438
	I0929 12:08:17.357178  616416 main.go:141] libmachine: (ha-813438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:1c:f7", ip: ""} in network mk-ha-813438: {Iface:virbr1 ExpiryTime:2025-09-29 13:03:15 +0000 UTC Type:0 Mac:52:54:00:e9:1c:f7 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-813438 Clientid:01:52:54:00:e9:1c:f7}
	I0929 12:08:17.357209  616416 main.go:141] libmachine: (ha-813438) DBG | domain ha-813438 has defined IP address 192.168.39.12 and MAC address 52:54:00:e9:1c:f7 in network mk-ha-813438
	I0929 12:08:17.357384  616416 main.go:141] libmachine: (ha-813438) Calling .GetSSHPort
	I0929 12:08:17.357565  616416 main.go:141] libmachine: (ha-813438) Calling .GetSSHKeyPath
	I0929 12:08:17.357737  616416 main.go:141] libmachine: (ha-813438) Calling .GetSSHUsername
	I0929 12:08:17.357898  616416 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/ha-813438/id_rsa Username:docker}
	I0929 12:08:17.448178  616416 ssh_runner.go:195] Run: systemctl --version
	I0929 12:08:17.456555  616416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:08:17.475721  616416 kubeconfig.go:125] found "ha-813438" server: "https://192.168.39.254:8443"
	I0929 12:08:17.475767  616416 api_server.go:166] Checking apiserver status ...
	I0929 12:08:17.475804  616416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:08:17.499785  616416 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2495/cgroup
	W0929 12:08:17.521210  616416 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2495/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 12:08:17.521279  616416 ssh_runner.go:195] Run: ls
	I0929 12:08:17.532079  616416 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0929 12:08:17.538424  616416 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0929 12:08:17.538459  616416 status.go:463] ha-813438 apiserver status = Running (err=<nil>)
	I0929 12:08:17.538473  616416 status.go:176] ha-813438 status: &{Name:ha-813438 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:08:17.538499  616416 status.go:174] checking status of ha-813438-m02 ...
	I0929 12:08:17.538873  616416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 12:08:17.538923  616416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:08:17.552683  616416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34461
	I0929 12:08:17.553210  616416 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:08:17.553650  616416 main.go:141] libmachine: Using API Version  1
	I0929 12:08:17.553679  616416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:08:17.554032  616416 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:08:17.554242  616416 main.go:141] libmachine: (ha-813438-m02) Calling .GetState
	I0929 12:08:17.556245  616416 status.go:371] ha-813438-m02 host status = "Stopped" (err=<nil>)
	I0929 12:08:17.556266  616416 status.go:384] host is not running, skipping remaining checks
	I0929 12:08:17.556274  616416 status.go:176] ha-813438-m02 status: &{Name:ha-813438-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:08:17.556298  616416 status.go:174] checking status of ha-813438-m03 ...
	I0929 12:08:17.556587  616416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 12:08:17.556637  616416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:08:17.571467  616416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37913
	I0929 12:08:17.571941  616416 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:08:17.572419  616416 main.go:141] libmachine: Using API Version  1
	I0929 12:08:17.572448  616416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:08:17.572793  616416 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:08:17.572988  616416 main.go:141] libmachine: (ha-813438-m03) Calling .GetState
	I0929 12:08:17.574947  616416 status.go:371] ha-813438-m03 host status = "Running" (err=<nil>)
	I0929 12:08:17.574964  616416 host.go:66] Checking if "ha-813438-m03" exists ...
	I0929 12:08:17.575300  616416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 12:08:17.575356  616416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:08:17.590148  616416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39311
	I0929 12:08:17.590634  616416 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:08:17.591096  616416 main.go:141] libmachine: Using API Version  1
	I0929 12:08:17.591164  616416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:08:17.591596  616416 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:08:17.591837  616416 main.go:141] libmachine: (ha-813438-m03) Calling .GetIP
	I0929 12:08:17.595153  616416 main.go:141] libmachine: (ha-813438-m03) DBG | domain ha-813438-m03 has defined MAC address 52:54:00:a9:f6:29 in network mk-ha-813438
	I0929 12:08:17.595618  616416 main.go:141] libmachine: (ha-813438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:f6:29", ip: ""} in network mk-ha-813438: {Iface:virbr1 ExpiryTime:2025-09-29 13:05:31 +0000 UTC Type:0 Mac:52:54:00:a9:f6:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:ha-813438-m03 Clientid:01:52:54:00:a9:f6:29}
	I0929 12:08:17.595654  616416 main.go:141] libmachine: (ha-813438-m03) DBG | domain ha-813438-m03 has defined IP address 192.168.39.144 and MAC address 52:54:00:a9:f6:29 in network mk-ha-813438
	I0929 12:08:17.595796  616416 host.go:66] Checking if "ha-813438-m03" exists ...
	I0929 12:08:17.596184  616416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 12:08:17.596226  616416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:08:17.612246  616416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I0929 12:08:17.612879  616416 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:08:17.613521  616416 main.go:141] libmachine: Using API Version  1
	I0929 12:08:17.613555  616416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:08:17.613957  616416 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:08:17.614174  616416 main.go:141] libmachine: (ha-813438-m03) Calling .DriverName
	I0929 12:08:17.614397  616416 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:08:17.614421  616416 main.go:141] libmachine: (ha-813438-m03) Calling .GetSSHHostname
	I0929 12:08:17.617744  616416 main.go:141] libmachine: (ha-813438-m03) DBG | domain ha-813438-m03 has defined MAC address 52:54:00:a9:f6:29 in network mk-ha-813438
	I0929 12:08:17.618322  616416 main.go:141] libmachine: (ha-813438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:f6:29", ip: ""} in network mk-ha-813438: {Iface:virbr1 ExpiryTime:2025-09-29 13:05:31 +0000 UTC Type:0 Mac:52:54:00:a9:f6:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:ha-813438-m03 Clientid:01:52:54:00:a9:f6:29}
	I0929 12:08:17.618359  616416 main.go:141] libmachine: (ha-813438-m03) DBG | domain ha-813438-m03 has defined IP address 192.168.39.144 and MAC address 52:54:00:a9:f6:29 in network mk-ha-813438
	I0929 12:08:17.618555  616416 main.go:141] libmachine: (ha-813438-m03) Calling .GetSSHPort
	I0929 12:08:17.618795  616416 main.go:141] libmachine: (ha-813438-m03) Calling .GetSSHKeyPath
	I0929 12:08:17.619023  616416 main.go:141] libmachine: (ha-813438-m03) Calling .GetSSHUsername
	I0929 12:08:17.619220  616416 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/ha-813438-m03/id_rsa Username:docker}
	I0929 12:08:17.709185  616416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:08:17.729042  616416 kubeconfig.go:125] found "ha-813438" server: "https://192.168.39.254:8443"
	I0929 12:08:17.729077  616416 api_server.go:166] Checking apiserver status ...
	I0929 12:08:17.729129  616416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:08:17.750892  616416 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2332/cgroup
	W0929 12:08:17.764483  616416 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2332/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 12:08:17.764566  616416 ssh_runner.go:195] Run: ls
	I0929 12:08:17.770470  616416 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0929 12:08:17.775627  616416 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0929 12:08:17.775656  616416 status.go:463] ha-813438-m03 apiserver status = Running (err=<nil>)
	I0929 12:08:17.775666  616416 status.go:176] ha-813438-m03 status: &{Name:ha-813438-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:08:17.775682  616416 status.go:174] checking status of ha-813438-m04 ...
	I0929 12:08:17.776033  616416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 12:08:17.776082  616416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:08:17.790028  616416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35379
	I0929 12:08:17.790592  616416 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:08:17.791079  616416 main.go:141] libmachine: Using API Version  1
	I0929 12:08:17.791127  616416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:08:17.791550  616416 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:08:17.791879  616416 main.go:141] libmachine: (ha-813438-m04) Calling .GetState
	I0929 12:08:17.793751  616416 status.go:371] ha-813438-m04 host status = "Running" (err=<nil>)
	I0929 12:08:17.793770  616416 host.go:66] Checking if "ha-813438-m04" exists ...
	I0929 12:08:17.794086  616416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 12:08:17.794161  616416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:08:17.810009  616416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35077
	I0929 12:08:17.810539  616416 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:08:17.811070  616416 main.go:141] libmachine: Using API Version  1
	I0929 12:08:17.811087  616416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:08:17.811456  616416 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:08:17.811668  616416 main.go:141] libmachine: (ha-813438-m04) Calling .GetIP
	I0929 12:08:17.814977  616416 main.go:141] libmachine: (ha-813438-m04) DBG | domain ha-813438-m04 has defined MAC address 52:54:00:54:e6:ec in network mk-ha-813438
	I0929 12:08:17.815598  616416 main.go:141] libmachine: (ha-813438-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:e6:ec", ip: ""} in network mk-ha-813438: {Iface:virbr1 ExpiryTime:2025-09-29 13:07:13 +0000 UTC Type:0 Mac:52:54:00:54:e6:ec Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-813438-m04 Clientid:01:52:54:00:54:e6:ec}
	I0929 12:08:17.815615  616416 main.go:141] libmachine: (ha-813438-m04) DBG | domain ha-813438-m04 has defined IP address 192.168.39.146 and MAC address 52:54:00:54:e6:ec in network mk-ha-813438
	I0929 12:08:17.815814  616416 host.go:66] Checking if "ha-813438-m04" exists ...
	I0929 12:08:17.816273  616416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 12:08:17.816328  616416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:08:17.830999  616416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38099
	I0929 12:08:17.831534  616416 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:08:17.831975  616416 main.go:141] libmachine: Using API Version  1
	I0929 12:08:17.831998  616416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:08:17.832431  616416 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:08:17.832667  616416 main.go:141] libmachine: (ha-813438-m04) Calling .DriverName
	I0929 12:08:17.832941  616416 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:08:17.832970  616416 main.go:141] libmachine: (ha-813438-m04) Calling .GetSSHHostname
	I0929 12:08:17.837019  616416 main.go:141] libmachine: (ha-813438-m04) DBG | domain ha-813438-m04 has defined MAC address 52:54:00:54:e6:ec in network mk-ha-813438
	I0929 12:08:17.837754  616416 main.go:141] libmachine: (ha-813438-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:e6:ec", ip: ""} in network mk-ha-813438: {Iface:virbr1 ExpiryTime:2025-09-29 13:07:13 +0000 UTC Type:0 Mac:52:54:00:54:e6:ec Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-813438-m04 Clientid:01:52:54:00:54:e6:ec}
	I0929 12:08:17.837786  616416 main.go:141] libmachine: (ha-813438-m04) DBG | domain ha-813438-m04 has defined IP address 192.168.39.146 and MAC address 52:54:00:54:e6:ec in network mk-ha-813438
	I0929 12:08:17.838072  616416 main.go:141] libmachine: (ha-813438-m04) Calling .GetSSHPort
	I0929 12:08:17.838342  616416 main.go:141] libmachine: (ha-813438-m04) Calling .GetSSHKeyPath
	I0929 12:08:17.838568  616416 main.go:141] libmachine: (ha-813438-m04) Calling .GetSSHUsername
	I0929 12:08:17.838797  616416 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/ha-813438-m04/id_rsa Username:docker}
	I0929 12:08:17.923266  616416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:08:17.944980  616416 status.go:176] ha-813438-m04 status: &{Name:ha-813438-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (15.54s)
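
The non-zero exit above is expected: with m02 stopped, `minikube status` reports the degraded cluster through its exit code (exit status 7 in this run) while still printing per-node state. A small Go sketch of how a caller might surface that code, assuming the same binary path and profile as the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Run the same status command the test uses; with a stopped node it
	// exits non-zero (exit status 7 in the run above).
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-813438", "status", "--alsologtostderr", "-v", "5")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("status exit code:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run status:", err)
	}
}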

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (25.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 node start m02 --alsologtostderr -v 5
E0929 12:08:18.895044  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:08:26.176733  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-813438 node start m02 --alsologtostderr -v 5: (24.027485164s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-813438 status --alsologtostderr -v 5: (1.039130334s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (25.15s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.043621342s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.04s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (158.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 stop --alsologtostderr -v 5
E0929 12:09:07.138267  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-813438 stop --alsologtostderr -v 5: (41.691779024s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 start --wait true --alsologtostderr -v 5
E0929 12:10:29.060569  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-813438 start --wait true --alsologtostderr -v 5: (1m56.26886402s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (158.08s)
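
As the test name suggests, the check here is that `minikube node list` returns the same set of nodes before the stop and after `start --wait true`. A rough Go sketch of that comparison, again using the binary path and profile from the log (the stop/start step in the middle is elided):

package main

import (
	"fmt"
	"os/exec"
)

// nodeList returns the raw `minikube node list` output for a profile.
func nodeList(profile string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "node", "list").Output()
	return string(out), err
}

func main() {
	before, err := nodeList("ha-813438")
	if err != nil {
		panic(err)
	}
	// ... `minikube stop` and `minikube start --wait true` would run here ...
	after, err := nodeList("ha-813438")
	if err != nil {
		panic(err)
	}
	if before == after {
		fmt.Println("node list preserved across restart")
	} else {
		fmt.Printf("node list changed:\nbefore:\n%s\nafter:\n%s\n", before, after)
	}
}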

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (7.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-813438 node delete m03 --alsologtostderr -v 5: (7.04979658s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.89s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-813438 stop --alsologtostderr -v 5: (40.894119961s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-813438 status --alsologtostderr -v 5: exit status 7 (101.870761ms)

                                                
                                                
-- stdout --
	ha-813438
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-813438-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-813438-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 12:12:12.474084  618632 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:12:12.474578  618632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:12:12.474590  618632 out.go:374] Setting ErrFile to fd 2...
	I0929 12:12:12.474594  618632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:12:12.474799  618632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21654-591397/.minikube/bin
	I0929 12:12:12.474973  618632 out.go:368] Setting JSON to false
	I0929 12:12:12.475007  618632 mustload.go:65] Loading cluster: ha-813438
	I0929 12:12:12.475054  618632 notify.go:220] Checking for updates...
	I0929 12:12:12.475588  618632 config.go:182] Loaded profile config "ha-813438": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:12:12.475618  618632 status.go:174] checking status of ha-813438 ...
	I0929 12:12:12.476152  618632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 12:12:12.476201  618632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:12:12.489965  618632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37883
	I0929 12:12:12.490509  618632 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:12:12.491258  618632 main.go:141] libmachine: Using API Version  1
	I0929 12:12:12.491287  618632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:12:12.491642  618632 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:12:12.491864  618632 main.go:141] libmachine: (ha-813438) Calling .GetState
	I0929 12:12:12.493528  618632 status.go:371] ha-813438 host status = "Stopped" (err=<nil>)
	I0929 12:12:12.493546  618632 status.go:384] host is not running, skipping remaining checks
	I0929 12:12:12.493564  618632 status.go:176] ha-813438 status: &{Name:ha-813438 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:12:12.493589  618632 status.go:174] checking status of ha-813438-m02 ...
	I0929 12:12:12.493869  618632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 12:12:12.493914  618632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:12:12.507359  618632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42899
	I0929 12:12:12.507887  618632 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:12:12.508348  618632 main.go:141] libmachine: Using API Version  1
	I0929 12:12:12.508375  618632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:12:12.508715  618632 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:12:12.508930  618632 main.go:141] libmachine: (ha-813438-m02) Calling .GetState
	I0929 12:12:12.510640  618632 status.go:371] ha-813438-m02 host status = "Stopped" (err=<nil>)
	I0929 12:12:12.510654  618632 status.go:384] host is not running, skipping remaining checks
	I0929 12:12:12.510660  618632 status.go:176] ha-813438-m02 status: &{Name:ha-813438-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:12:12.510679  618632 status.go:174] checking status of ha-813438-m04 ...
	I0929 12:12:12.510989  618632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 12:12:12.511028  618632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:12:12.524766  618632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33327
	I0929 12:12:12.525230  618632 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:12:12.525828  618632 main.go:141] libmachine: Using API Version  1
	I0929 12:12:12.525873  618632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:12:12.526319  618632 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:12:12.526543  618632 main.go:141] libmachine: (ha-813438-m04) Calling .GetState
	I0929 12:12:12.528448  618632 status.go:371] ha-813438-m04 host status = "Stopped" (err=<nil>)
	I0929 12:12:12.528463  618632 status.go:384] host is not running, skipping remaining checks
	I0929 12:12:12.528470  618632 status.go:176] ha-813438-m04 status: &{Name:ha-813438-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (41.00s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (134.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 start --wait true --alsologtostderr -v 5 --driver=kvm2  --auto-update-drivers=false
E0929 12:12:45.202280  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:13:12.902372  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:13:18.897408  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-813438 start --wait true --alsologtostderr -v 5 --driver=kvm2  --auto-update-drivers=false: (2m13.196739694s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (134.04s)
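
The final kubectl check above prints the Ready condition of every node through a go-template. A Go sketch of the equivalent readiness check, shelling out to kubectl with the same template string as the logged command:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template the test passes to kubectl: one Ready status per node.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		panic(err)
	}
	allReady := true
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			allReady = false
			fmt.Println("node reported Ready =", status)
		}
	}
	fmt.Println("all nodes Ready:", allReady)
}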

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (88.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-813438 node add --control-plane --alsologtostderr -v 5: (1m27.673492375s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-813438 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (88.60s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.94s)

                                                
                                    
TestImageBuild/serial/Setup (47.18s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-196808 --driver=kvm2  --auto-update-drivers=false
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-196808 --driver=kvm2  --auto-update-drivers=false: (47.180084268s)
--- PASS: TestImageBuild/serial/Setup (47.18s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.6s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-196808
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-196808: (1.598261804s)
--- PASS: TestImageBuild/serial/NormalBuild (1.60s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1.1s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-196808
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-196808: (1.099439444s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.10s)
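
The `--build-opt` flags pass extra options through to the underlying image build: here a build argument (ENV_A=test_env_str) and no-cache. A hedged Go sketch that drives the same command shown in the log, with the image name, flags, and context directory copied verbatim from it:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Equivalent to the logged command: minikube image build with a build
	// argument and the layer cache disabled.
	cmd := exec.Command("out/minikube-linux-amd64", "image", "build",
		"-t", "aaa:latest",
		"--build-opt=build-arg=ENV_A=test_env_str",
		"--build-opt=no-cache",
		"./testdata/image-build/test-arg",
		"-p", "image-196808")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("image build failed:", err)
	}
}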

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.89s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-196808
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.89s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.16s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-196808
image_test.go:88: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-196808: (1.157022048s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.16s)

                                                
                                    
TestJSONOutput/start/Command (65.95s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-703695 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --auto-update-drivers=false
E0929 12:17:45.197336  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-703695 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --auto-update-drivers=false: (1m5.949698314s)
--- PASS: TestJSONOutput/start/Command (65.95s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-703695 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-703695 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (14.21s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-703695 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-703695 --output=json --user=testUser: (14.209984803s)
--- PASS: TestJSONOutput/stop/Command (14.21s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-900812 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-900812 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (67.915742ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f25db9fe-be15-4bf5-be18-48684ec10911","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-900812] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8fa1ad60-6376-4327-bf8a-64c9d29326d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21654"}}
	{"specversion":"1.0","id":"cc53d84f-03b8-4c94-b6ec-fa01a023e2bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"96cdfde4-0d42-41b2-8a52-33c849835707","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21654-591397/kubeconfig"}}
	{"specversion":"1.0","id":"258aefc6-2b5a-417a-b310-6c8b9a6bdee5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21654-591397/.minikube"}}
	{"specversion":"1.0","id":"cb9770a6-8882-4349-af4a-05fd9d04140f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"75137237-b8e1-4442-a00b-43f8051a8bc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"28fd6ff9-92b2-4d78-ae2d-2bcb902c144c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-900812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-900812
--- PASS: TestErrorJSONOutput (0.22s)
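Each line in the stdout above is a CloudEvents-style JSON envelope; the last one, of type io.k8s.sigs.minikube.error, carries the exit code and error name (DRV_UNSUPPORTED_OS, 56) that the test checks. A minimal sketch of decoding such a line, using only field names visible in the output above; this is an illustration, not minikube's own decoder:

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the parts of the envelope used below: the event type and its data map.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`

	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("error %s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
	}
}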

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (94.38s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-260692 --driver=kvm2  --auto-update-drivers=false
E0929 12:18:18.897581  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-260692 --driver=kvm2  --auto-update-drivers=false: (45.60199206s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-274294 --driver=kvm2  --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-274294 --driver=kvm2  --auto-update-drivers=false: (45.868673588s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-260692
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-274294
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-274294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-274294
helpers_test.go:175: Cleaning up "first-260692" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-260692
--- PASS: TestMinikubeProfile (94.38s)
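TestMinikubeProfile drives minikube profile list -ojson after switching between the two profiles. A small sketch of consuming that output without assuming its exact schema; the binary name minikube on PATH is an assumption here:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Run the same command the test exercises.
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}

	// Decode generically rather than assuming a fixed schema.
	var profiles any
	if err := json.Unmarshal(out, &profiles); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", profiles)
}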

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (24.53s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-007871 --memory=3072 --mount-string /tmp/TestMountStartserial1195787312/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-007871 --memory=3072 --mount-string /tmp/TestMountStartserial1195787312/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --auto-update-drivers=false: (23.52593408s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.53s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-007871 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-007871 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
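The VerifyMount* steps check the guest-side mount with findmnt --json /minikube-host. A sketch of decoding that JSON; the field names follow util-linux's findmnt output and the profile name is taken from the run above, both of which should be treated as assumptions to verify locally:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// findmntOutput models the JSON printed by `findmnt --json <mountpoint>`.
type findmntOutput struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		Fstype  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	// Ask the guest for the mount, as the test does.
	out, err := exec.Command("minikube", "-p", "mount-start-1-007871", "ssh", "--", "findmnt", "--json", "/minikube-host").Output()
	if err != nil {
		panic(err)
	}
	var fm findmntOutput
	if err := json.Unmarshal(out, &fm); err != nil {
		panic(err)
	}
	for _, fs := range fm.Filesystems {
		fmt.Printf("%s mounted from %s (%s, %s)\n", fs.Target, fs.Source, fs.Fstype, fs.Options)
	}
}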

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (22.56s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-024248 --memory=3072 --mount-string /tmp/TestMountStartserial1195787312/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-024248 --memory=3072 --mount-string /tmp/TestMountStartserial1195787312/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --auto-update-drivers=false: (21.55974538s)
--- PASS: TestMountStart/serial/StartWithMountSecond (22.56s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-024248 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-024248 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-007871 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-024248 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-024248 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.35s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-024248
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-024248: (1.345376339s)
--- PASS: TestMountStart/serial/Stop (1.35s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (21.48s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-024248
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-024248: (20.482713056s)
--- PASS: TestMountStart/serial/RestartStopped (21.48s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-024248 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-024248 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (117.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-124808 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --auto-update-drivers=false
E0929 12:21:21.963252  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:22:45.196553  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-124808 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --auto-update-drivers=false: (1m56.65206105s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (117.10s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124808 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124808 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-124808 -- rollout status deployment/busybox: (3.212643473s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124808 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124808 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124808 -- exec busybox-7b57f96db7-lbwcm -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124808 -- exec busybox-7b57f96db7-t75xj -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124808 -- exec busybox-7b57f96db7-lbwcm -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124808 -- exec busybox-7b57f96db7-t75xj -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124808 -- exec busybox-7b57f96db7-lbwcm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124808 -- exec busybox-7b57f96db7-t75xj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.82s)
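DeployApp2Nodes verifies in-cluster DNS by exec-ing nslookup in each busybox pod for kubernetes.io, kubernetes.default, and the fully qualified service name. A sketch of the same checks through the minikube kubectl wrapper; the profile and pod names are copied from the run above and minikube on PATH is an assumption:

package main

import (
	"fmt"
	"os/exec"
)

// lookupFromPod runs nslookup inside a pod through the minikube kubectl wrapper,
// mirroring the per-pod DNS checks in the test.
func lookupFromPod(profile, pod, name string) error {
	out, err := exec.Command("minikube", "kubectl", "-p", profile, "--",
		"exec", pod, "--", "nslookup", name).CombinedOutput()
	if err != nil {
		return fmt.Errorf("nslookup %s from %s failed: %v: %s", name, pod, err, out)
	}
	return nil
}

func main() {
	profile := "multinode-124808"
	pods := []string{"busybox-7b57f96db7-lbwcm", "busybox-7b57f96db7-t75xj"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			if err := lookupFromPod(profile, pod, name); err != nil {
				fmt.Println(err)
				return
			}
			fmt.Printf("%s resolved %s\n", pod, name)
		}
	}
}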

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124808 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124808 -- exec busybox-7b57f96db7-lbwcm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124808 -- exec busybox-7b57f96db7-lbwcm -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124808 -- exec busybox-7b57f96db7-t75xj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124808 -- exec busybox-7b57f96db7-t75xj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)
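The shell pipeline above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) picks the fifth line of the nslookup output and takes its third space-separated field, which is the host gateway IP the pods then ping. A small Go sketch of the same extraction; the sample transcript is hypothetical:

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics the shell pipeline: take the fifth line of the nslookup
// output and return its third space-separated field.
func hostIP(nslookupOutput string) (string, error) {
	lines := strings.Split(nslookupOutput, "\n")
	if len(lines) < 5 {
		return "", fmt.Errorf("expected at least 5 lines, got %d", len(lines))
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return "", fmt.Errorf("expected at least 3 fields on line 5, got %d", len(fields))
	}
	return fields[2], nil
}

func main() {
	// Hypothetical nslookup transcript; the real content depends on the in-cluster resolver.
	sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.39.1\n"
	ip, err := hostIP(sample)
	if err != nil {
		fmt.Println("parse error:", err)
		return
	}
	fmt.Println("host IP:", ip)
}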

                                                
                                    
x
+
TestMultiNode/serial/AddNode (51.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-124808 -v=5 --alsologtostderr
E0929 12:23:18.895011  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-124808 -v=5 --alsologtostderr: (50.652900006s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.27s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-124808 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.64s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 cp testdata/cp-test.txt multinode-124808:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 ssh -n multinode-124808 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 cp multinode-124808:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1248888491/001/cp-test_multinode-124808.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 ssh -n multinode-124808 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 cp multinode-124808:/home/docker/cp-test.txt multinode-124808-m02:/home/docker/cp-test_multinode-124808_multinode-124808-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 ssh -n multinode-124808 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 ssh -n multinode-124808-m02 "sudo cat /home/docker/cp-test_multinode-124808_multinode-124808-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 cp multinode-124808:/home/docker/cp-test.txt multinode-124808-m03:/home/docker/cp-test_multinode-124808_multinode-124808-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 ssh -n multinode-124808 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 ssh -n multinode-124808-m03 "sudo cat /home/docker/cp-test_multinode-124808_multinode-124808-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 cp testdata/cp-test.txt multinode-124808-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 ssh -n multinode-124808-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 cp multinode-124808-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1248888491/001/cp-test_multinode-124808-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 ssh -n multinode-124808-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 cp multinode-124808-m02:/home/docker/cp-test.txt multinode-124808:/home/docker/cp-test_multinode-124808-m02_multinode-124808.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 ssh -n multinode-124808-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 ssh -n multinode-124808 "sudo cat /home/docker/cp-test_multinode-124808-m02_multinode-124808.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 cp multinode-124808-m02:/home/docker/cp-test.txt multinode-124808-m03:/home/docker/cp-test_multinode-124808-m02_multinode-124808-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 ssh -n multinode-124808-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 ssh -n multinode-124808-m03 "sudo cat /home/docker/cp-test_multinode-124808-m02_multinode-124808-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 cp testdata/cp-test.txt multinode-124808-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 ssh -n multinode-124808-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 cp multinode-124808-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1248888491/001/cp-test_multinode-124808-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 ssh -n multinode-124808-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 cp multinode-124808-m03:/home/docker/cp-test.txt multinode-124808:/home/docker/cp-test_multinode-124808-m03_multinode-124808.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 ssh -n multinode-124808-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 ssh -n multinode-124808 "sudo cat /home/docker/cp-test_multinode-124808-m03_multinode-124808.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 cp multinode-124808-m03:/home/docker/cp-test.txt multinode-124808-m02:/home/docker/cp-test_multinode-124808-m03_multinode-124808-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 ssh -n multinode-124808-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 ssh -n multinode-124808-m02 "sudo cat /home/docker/cp-test_multinode-124808-m03_multinode-124808-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.63s)
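CopyFile repeats one pattern for every node pair: copy a file into a node with minikube cp, then read it back over minikube ssh -n <node> and compare. A sketch of that round trip; the profile and node names are placeholders and minikube on PATH is an assumption:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// copyAndVerify copies a local file into a node, reads it back over ssh,
// and checks that the contents survived the round trip.
func copyAndVerify(profile, node, localPath, remotePath string) error {
	want, err := os.ReadFile(localPath)
	if err != nil {
		return err
	}
	if out, err := exec.Command("minikube", "-p", profile, "cp", localPath, node+":"+remotePath).CombinedOutput(); err != nil {
		return fmt.Errorf("cp failed: %v: %s", err, out)
	}
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+remotePath).Output()
	if err != nil {
		return fmt.Errorf("ssh cat failed: %v", err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		return fmt.Errorf("contents differ after round trip")
	}
	return nil
}

func main() {
	if err := copyAndVerify("multinode-124808", "multinode-124808-m02", "testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
		fmt.Println("verify failed:", err)
		return
	}
	fmt.Println("copy verified")
}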

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-124808 node stop m03: (1.756094397s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-124808 status: exit status 7 (451.1829ms)

                                                
                                                
-- stdout --
	multinode-124808
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-124808-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-124808-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-124808 status --alsologtostderr: exit status 7 (454.440537ms)

                                                
                                                
-- stdout --
	multinode-124808
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-124808-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-124808-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 12:24:07.235927  627529 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:24:07.236023  627529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:24:07.236030  627529 out.go:374] Setting ErrFile to fd 2...
	I0929 12:24:07.236034  627529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:24:07.236271  627529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21654-591397/.minikube/bin
	I0929 12:24:07.236454  627529 out.go:368] Setting JSON to false
	I0929 12:24:07.236487  627529 mustload.go:65] Loading cluster: multinode-124808
	I0929 12:24:07.236579  627529 notify.go:220] Checking for updates...
	I0929 12:24:07.236925  627529 config.go:182] Loaded profile config "multinode-124808": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:24:07.236954  627529 status.go:174] checking status of multinode-124808 ...
	I0929 12:24:07.237734  627529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 12:24:07.237833  627529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:24:07.253997  627529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43967
	I0929 12:24:07.254573  627529 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:24:07.255286  627529 main.go:141] libmachine: Using API Version  1
	I0929 12:24:07.255326  627529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:24:07.255731  627529 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:24:07.255984  627529 main.go:141] libmachine: (multinode-124808) Calling .GetState
	I0929 12:24:07.258033  627529 status.go:371] multinode-124808 host status = "Running" (err=<nil>)
	I0929 12:24:07.258053  627529 host.go:66] Checking if "multinode-124808" exists ...
	I0929 12:24:07.258375  627529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 12:24:07.258443  627529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:24:07.273066  627529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42797
	I0929 12:24:07.273518  627529 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:24:07.274061  627529 main.go:141] libmachine: Using API Version  1
	I0929 12:24:07.274097  627529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:24:07.274449  627529 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:24:07.274686  627529 main.go:141] libmachine: (multinode-124808) Calling .GetIP
	I0929 12:24:07.278055  627529 main.go:141] libmachine: (multinode-124808) DBG | domain multinode-124808 has defined MAC address 52:54:00:02:4b:3f in network mk-multinode-124808
	I0929 12:24:07.278579  627529 main.go:141] libmachine: (multinode-124808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:4b:3f", ip: ""} in network mk-multinode-124808: {Iface:virbr1 ExpiryTime:2025-09-29 13:21:18 +0000 UTC Type:0 Mac:52:54:00:02:4b:3f Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-124808 Clientid:01:52:54:00:02:4b:3f}
	I0929 12:24:07.278643  627529 main.go:141] libmachine: (multinode-124808) DBG | domain multinode-124808 has defined IP address 192.168.39.202 and MAC address 52:54:00:02:4b:3f in network mk-multinode-124808
	I0929 12:24:07.278806  627529 host.go:66] Checking if "multinode-124808" exists ...
	I0929 12:24:07.279189  627529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 12:24:07.279230  627529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:24:07.294089  627529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43427
	I0929 12:24:07.294685  627529 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:24:07.295198  627529 main.go:141] libmachine: Using API Version  1
	I0929 12:24:07.295223  627529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:24:07.295554  627529 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:24:07.295764  627529 main.go:141] libmachine: (multinode-124808) Calling .DriverName
	I0929 12:24:07.295965  627529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:24:07.295992  627529 main.go:141] libmachine: (multinode-124808) Calling .GetSSHHostname
	I0929 12:24:07.299088  627529 main.go:141] libmachine: (multinode-124808) DBG | domain multinode-124808 has defined MAC address 52:54:00:02:4b:3f in network mk-multinode-124808
	I0929 12:24:07.299586  627529 main.go:141] libmachine: (multinode-124808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:4b:3f", ip: ""} in network mk-multinode-124808: {Iface:virbr1 ExpiryTime:2025-09-29 13:21:18 +0000 UTC Type:0 Mac:52:54:00:02:4b:3f Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-124808 Clientid:01:52:54:00:02:4b:3f}
	I0929 12:24:07.299607  627529 main.go:141] libmachine: (multinode-124808) DBG | domain multinode-124808 has defined IP address 192.168.39.202 and MAC address 52:54:00:02:4b:3f in network mk-multinode-124808
	I0929 12:24:07.299801  627529 main.go:141] libmachine: (multinode-124808) Calling .GetSSHPort
	I0929 12:24:07.299969  627529 main.go:141] libmachine: (multinode-124808) Calling .GetSSHKeyPath
	I0929 12:24:07.300129  627529 main.go:141] libmachine: (multinode-124808) Calling .GetSSHUsername
	I0929 12:24:07.300286  627529 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/multinode-124808/id_rsa Username:docker}
	I0929 12:24:07.385811  627529 ssh_runner.go:195] Run: systemctl --version
	I0929 12:24:07.393167  627529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:24:07.410672  627529 kubeconfig.go:125] found "multinode-124808" server: "https://192.168.39.202:8443"
	I0929 12:24:07.410725  627529 api_server.go:166] Checking apiserver status ...
	I0929 12:24:07.410776  627529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:24:07.434444  627529 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2480/cgroup
	W0929 12:24:07.448244  627529 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2480/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 12:24:07.448313  627529 ssh_runner.go:195] Run: ls
	I0929 12:24:07.453957  627529 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I0929 12:24:07.458794  627529 api_server.go:279] https://192.168.39.202:8443/healthz returned 200:
	ok
	I0929 12:24:07.458822  627529 status.go:463] multinode-124808 apiserver status = Running (err=<nil>)
	I0929 12:24:07.458838  627529 status.go:176] multinode-124808 status: &{Name:multinode-124808 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:24:07.458880  627529 status.go:174] checking status of multinode-124808-m02 ...
	I0929 12:24:07.459192  627529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 12:24:07.459243  627529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:24:07.473925  627529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34713
	I0929 12:24:07.474447  627529 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:24:07.475011  627529 main.go:141] libmachine: Using API Version  1
	I0929 12:24:07.475035  627529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:24:07.475461  627529 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:24:07.475751  627529 main.go:141] libmachine: (multinode-124808-m02) Calling .GetState
	I0929 12:24:07.478070  627529 status.go:371] multinode-124808-m02 host status = "Running" (err=<nil>)
	I0929 12:24:07.478089  627529 host.go:66] Checking if "multinode-124808-m02" exists ...
	I0929 12:24:07.478457  627529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 12:24:07.478508  627529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:24:07.492777  627529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37625
	I0929 12:24:07.493359  627529 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:24:07.493892  627529 main.go:141] libmachine: Using API Version  1
	I0929 12:24:07.493914  627529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:24:07.494330  627529 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:24:07.494539  627529 main.go:141] libmachine: (multinode-124808-m02) Calling .GetIP
	I0929 12:24:07.497735  627529 main.go:141] libmachine: (multinode-124808-m02) DBG | domain multinode-124808-m02 has defined MAC address 52:54:00:a3:2b:2e in network mk-multinode-124808
	I0929 12:24:07.498163  627529 main.go:141] libmachine: (multinode-124808-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:2b:2e", ip: ""} in network mk-multinode-124808: {Iface:virbr1 ExpiryTime:2025-09-29 13:22:24 +0000 UTC Type:0 Mac:52:54:00:a3:2b:2e Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:multinode-124808-m02 Clientid:01:52:54:00:a3:2b:2e}
	I0929 12:24:07.498194  627529 main.go:141] libmachine: (multinode-124808-m02) DBG | domain multinode-124808-m02 has defined IP address 192.168.39.114 and MAC address 52:54:00:a3:2b:2e in network mk-multinode-124808
	I0929 12:24:07.498352  627529 host.go:66] Checking if "multinode-124808-m02" exists ...
	I0929 12:24:07.498651  627529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 12:24:07.498698  627529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:24:07.512840  627529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37159
	I0929 12:24:07.513347  627529 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:24:07.513794  627529 main.go:141] libmachine: Using API Version  1
	I0929 12:24:07.513818  627529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:24:07.514169  627529 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:24:07.514380  627529 main.go:141] libmachine: (multinode-124808-m02) Calling .DriverName
	I0929 12:24:07.514567  627529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:24:07.514598  627529 main.go:141] libmachine: (multinode-124808-m02) Calling .GetSSHHostname
	I0929 12:24:07.517636  627529 main.go:141] libmachine: (multinode-124808-m02) DBG | domain multinode-124808-m02 has defined MAC address 52:54:00:a3:2b:2e in network mk-multinode-124808
	I0929 12:24:07.518123  627529 main.go:141] libmachine: (multinode-124808-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:2b:2e", ip: ""} in network mk-multinode-124808: {Iface:virbr1 ExpiryTime:2025-09-29 13:22:24 +0000 UTC Type:0 Mac:52:54:00:a3:2b:2e Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:multinode-124808-m02 Clientid:01:52:54:00:a3:2b:2e}
	I0929 12:24:07.518163  627529 main.go:141] libmachine: (multinode-124808-m02) DBG | domain multinode-124808-m02 has defined IP address 192.168.39.114 and MAC address 52:54:00:a3:2b:2e in network mk-multinode-124808
	I0929 12:24:07.518326  627529 main.go:141] libmachine: (multinode-124808-m02) Calling .GetSSHPort
	I0929 12:24:07.518489  627529 main.go:141] libmachine: (multinode-124808-m02) Calling .GetSSHKeyPath
	I0929 12:24:07.518656  627529 main.go:141] libmachine: (multinode-124808-m02) Calling .GetSSHUsername
	I0929 12:24:07.518814  627529 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21654-591397/.minikube/machines/multinode-124808-m02/id_rsa Username:docker}
	I0929 12:24:07.600637  627529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:24:07.620716  627529 status.go:176] multinode-124808-m02 status: &{Name:multinode-124808-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:24:07.620765  627529 status.go:174] checking status of multinode-124808-m03 ...
	I0929 12:24:07.621129  627529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 12:24:07.621187  627529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:24:07.636512  627529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39969
	I0929 12:24:07.637061  627529 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:24:07.637561  627529 main.go:141] libmachine: Using API Version  1
	I0929 12:24:07.637588  627529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:24:07.638065  627529 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:24:07.638338  627529 main.go:141] libmachine: (multinode-124808-m03) Calling .GetState
	I0929 12:24:07.640121  627529 status.go:371] multinode-124808-m03 host status = "Stopped" (err=<nil>)
	I0929 12:24:07.640138  627529 status.go:384] host is not running, skipping remaining checks
	I0929 12:24:07.640146  627529 status.go:176] multinode-124808-m03 status: &{Name:multinode-124808-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.66s)
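After stopping m03, minikube status exits with status 7 while still printing per-node state, and the test treats that exit code as expected rather than fatal. A sketch of handling it the same way; minikube on PATH and the profile name are assumptions:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Run status for the profile and inspect the exit code, as the test does.
	cmd := exec.Command("minikube", "-p", "multinode-124808", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running (exit status 0)")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// The log above shows status exiting 7 when at least one host is stopped;
		// the test treats this as an expected, non-fatal result.
		fmt.Println("some nodes stopped (exit status 7)")
	default:
		fmt.Println("status failed:", err)
	}
}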

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (40.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 node start m03 -v=5 --alsologtostderr
E0929 12:24:08.264006  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-124808 node start m03 -v=5 --alsologtostderr: (40.103289651s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.79s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (177.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-124808
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-124808
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-124808: (27.448253521s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-124808 --wait=true -v=5 --alsologtostderr
E0929 12:27:45.196798  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-124808 --wait=true -v=5 --alsologtostderr: (2m30.193671632s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-124808
--- PASS: TestMultiNode/serial/RestartKeepsNodes (177.75s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-124808 node delete m03: (1.823413494s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.41s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-124808 stop: (23.746686229s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-124808 status: exit status 7 (83.884926ms)

                                                
                                                
-- stdout --
	multinode-124808
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-124808-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-124808 status --alsologtostderr: exit status 7 (84.833186ms)

                                                
                                                
-- stdout --
	multinode-124808
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-124808-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 12:28:12.456391  629317 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:28:12.456502  629317 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:28:12.456507  629317 out.go:374] Setting ErrFile to fd 2...
	I0929 12:28:12.456511  629317 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:28:12.456719  629317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21654-591397/.minikube/bin
	I0929 12:28:12.456902  629317 out.go:368] Setting JSON to false
	I0929 12:28:12.456938  629317 mustload.go:65] Loading cluster: multinode-124808
	I0929 12:28:12.457051  629317 notify.go:220] Checking for updates...
	I0929 12:28:12.457473  629317 config.go:182] Loaded profile config "multinode-124808": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:28:12.457501  629317 status.go:174] checking status of multinode-124808 ...
	I0929 12:28:12.458004  629317 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 12:28:12.458048  629317 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:28:12.472893  629317 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37089
	I0929 12:28:12.473346  629317 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:28:12.474063  629317 main.go:141] libmachine: Using API Version  1
	I0929 12:28:12.474095  629317 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:28:12.474513  629317 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:28:12.474751  629317 main.go:141] libmachine: (multinode-124808) Calling .GetState
	I0929 12:28:12.476549  629317 status.go:371] multinode-124808 host status = "Stopped" (err=<nil>)
	I0929 12:28:12.476566  629317 status.go:384] host is not running, skipping remaining checks
	I0929 12:28:12.476573  629317 status.go:176] multinode-124808 status: &{Name:multinode-124808 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:28:12.476597  629317 status.go:174] checking status of multinode-124808-m02 ...
	I0929 12:28:12.476931  629317 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0929 12:28:12.476978  629317 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:28:12.491300  629317 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43339
	I0929 12:28:12.491758  629317 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:28:12.492208  629317 main.go:141] libmachine: Using API Version  1
	I0929 12:28:12.492232  629317 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:28:12.492569  629317 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:28:12.492787  629317 main.go:141] libmachine: (multinode-124808-m02) Calling .GetState
	I0929 12:28:12.494617  629317 status.go:371] multinode-124808-m02 host status = "Stopped" (err=<nil>)
	I0929 12:28:12.494633  629317 status.go:384] host is not running, skipping remaining checks
	I0929 12:28:12.494638  629317 status.go:176] multinode-124808-m02 status: &{Name:multinode-124808-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.92s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (90.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-124808 --wait=true -v=5 --alsologtostderr --driver=kvm2  --auto-update-drivers=false
E0929 12:28:18.897768  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-124808 --wait=true -v=5 --alsologtostderr --driver=kvm2  --auto-update-drivers=false: (1m29.804201472s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124808 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (90.41s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (47.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-124808
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-124808-m02 --driver=kvm2  --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-124808-m02 --driver=kvm2  --auto-update-drivers=false: exit status 14 (68.503174ms)

                                                
                                                
-- stdout --
	* [multinode-124808-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21654
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21654-591397/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21654-591397/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-124808-m02' is duplicated with machine name 'multinode-124808-m02' in profile 'multinode-124808'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-124808-m03 --driver=kvm2  --auto-update-drivers=false
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-124808-m03 --driver=kvm2  --auto-update-drivers=false: (46.405513373s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-124808
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-124808: exit status 80 (247.612186ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-124808 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-124808-m03 already exists in multinode-124808-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-124808-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.62s)
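The MK_USAGE failure above comes from a simple rule: a new profile name may not collide with a machine name already used by an existing multi-node profile. A simplified sketch of that check, not minikube's actual validation code:

package main

import "fmt"

// conflictsWithNodeName reports whether a proposed profile name collides with
// a machine (node) name that an existing profile already uses.
func conflictsWithNodeName(newProfile string, existingNodes []string) bool {
	for _, node := range existingNodes {
		if node == newProfile {
			return true
		}
	}
	return false
}

func main() {
	existing := []string{"multinode-124808", "multinode-124808-m02", "multinode-124808-m03"}
	for _, candidate := range []string{"multinode-124808-m02", "multinode-124808-m04"} {
		if conflictsWithNodeName(candidate, existing) {
			fmt.Printf("%s: rejected, profile name should be unique\n", candidate)
		} else {
			fmt.Printf("%s: accepted\n", candidate)
		}
	}
}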

                                                
                                    
x
+
TestPreload (120.05s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-897900 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-897900 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m1.636344384s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-897900 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-897900 image pull gcr.io/k8s-minikube/busybox: (1.65059929s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-897900
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-897900: (7.607696294s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-897900 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --auto-update-drivers=false
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-897900 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --auto-update-drivers=false: (48.066507072s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-897900 image list
helpers_test.go:175: Cleaning up "test-preload-897900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-897900
--- PASS: TestPreload (120.05s)
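The preload scenario is a fixed command sequence: start an older Kubernetes with --preload=false, pull an extra image, stop, restart (this time a preload tarball applies), and confirm the pulled image is still listed. A sketch reproducing the same sequence outside the test harness; minikube on PATH and the profile name are assumptions:

package main

import (
	"fmt"
	"os/exec"
)

// run executes one minikube command and echoes its output.
func run(args ...string) error {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	profile := "test-preload-897900"
	steps := [][]string{
		{"start", "-p", profile, "--memory=3072", "--preload=false", "--kubernetes-version=v1.32.0"},
		{"-p", profile, "image", "pull", "gcr.io/k8s-minikube/busybox"},
		{"stop", "-p", profile},
		{"start", "-p", profile, "--memory=3072"},
		{"-p", profile, "image", "list"},
	}
	for _, step := range steps {
		if err := run(step...); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}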

                                                
                                    
x
+
TestScheduledStopUnix (119.7s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-059696 --memory=3072 --driver=kvm2  --auto-update-drivers=false
E0929 12:32:45.202281  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:33:18.895463  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-059696 --memory=3072 --driver=kvm2  --auto-update-drivers=false: (47.910946276s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-059696 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-059696 -n scheduled-stop-059696
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-059696 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0929 12:33:20.442422  595293 retry.go:31] will retry after 95.013µs: open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/scheduled-stop-059696/pid: no such file or directory
I0929 12:33:20.443599  595293 retry.go:31] will retry after 174.368µs: open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/scheduled-stop-059696/pid: no such file or directory
I0929 12:33:20.444775  595293 retry.go:31] will retry after 260.084µs: open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/scheduled-stop-059696/pid: no such file or directory
I0929 12:33:20.445945  595293 retry.go:31] will retry after 496.509µs: open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/scheduled-stop-059696/pid: no such file or directory
I0929 12:33:20.447083  595293 retry.go:31] will retry after 501.906µs: open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/scheduled-stop-059696/pid: no such file or directory
I0929 12:33:20.448279  595293 retry.go:31] will retry after 554.287µs: open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/scheduled-stop-059696/pid: no such file or directory
I0929 12:33:20.449435  595293 retry.go:31] will retry after 1.468695ms: open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/scheduled-stop-059696/pid: no such file or directory
I0929 12:33:20.451687  595293 retry.go:31] will retry after 1.745994ms: open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/scheduled-stop-059696/pid: no such file or directory
I0929 12:33:20.453918  595293 retry.go:31] will retry after 3.03187ms: open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/scheduled-stop-059696/pid: no such file or directory
I0929 12:33:20.457207  595293 retry.go:31] will retry after 4.941234ms: open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/scheduled-stop-059696/pid: no such file or directory
I0929 12:33:20.462494  595293 retry.go:31] will retry after 3.119258ms: open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/scheduled-stop-059696/pid: no such file or directory
I0929 12:33:20.465711  595293 retry.go:31] will retry after 5.175954ms: open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/scheduled-stop-059696/pid: no such file or directory
I0929 12:33:20.471996  595293 retry.go:31] will retry after 15.298893ms: open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/scheduled-stop-059696/pid: no such file or directory
I0929 12:33:20.488390  595293 retry.go:31] will retry after 23.416499ms: open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/scheduled-stop-059696/pid: no such file or directory
I0929 12:33:20.512695  595293 retry.go:31] will retry after 35.888164ms: open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/scheduled-stop-059696/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-059696 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-059696 -n scheduled-stop-059696
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-059696
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-059696 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-059696
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-059696: exit status 7 (77.195518ms)

                                                
                                                
-- stdout --
	scheduled-stop-059696
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-059696 -n scheduled-stop-059696
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-059696 -n scheduled-stop-059696: exit status 7 (65.04925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-059696" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-059696
--- PASS: TestScheduledStopUnix (119.70s)
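
Note: the scheduled-stop sequence exercised above can be reproduced by hand; a minimal sketch, profile name illustrative:

$ minikube start -p sched-demo --memory=3072 --driver=kvm2
$ minikube stop -p sched-demo --schedule 5m                  # arm a stop five minutes out
$ minikube status -p sched-demo --format='{{.TimeToStop}}'   # shows the remaining countdown
$ minikube stop -p sched-demo --cancel-scheduled             # disarm it
$ minikube stop -p sched-demo --schedule 15s                 # short fuse; the host reaches Stopped shortly after
$ minikube status -p sched-demo --format='{{.Host}}'         # prints "Stopped" and exits 7, as seen above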

                                                
                                    
TestSkaffold (130.07s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2561567503 version
skaffold_test.go:63: skaffold version: v2.16.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-884283 --memory=3072 --driver=kvm2  --auto-update-drivers=false
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-884283 --memory=3072 --driver=kvm2  --auto-update-drivers=false: (44.147479808s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2561567503 run --minikube-profile skaffold-884283 --kube-context skaffold-884283 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2561567503 run --minikube-profile skaffold-884283 --kube-context skaffold-884283 --status-check=true --port-forward=false --interactive=false: (1m12.074406473s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:352: "leeroy-app-65947968ff-nbqps" [3d13028e-c186-4a06-b48e-2918c65e7913] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.00417656s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:352: "leeroy-web-5cc745bbdf-29sfd" [32a438a1-a331-4cc9-92b5-ca157fc9fc6b] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 6.004792172s
helpers_test.go:175: Cleaning up "skaffold-884283" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-884283
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-884283: (1.042560254s)
--- PASS: TestSkaffold (130.07s)
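
Note: the Skaffold run above is an ordinary `skaffold run` pointed at a minikube profile; a sketch, profile name illustrative and assuming skaffold is on PATH:

$ minikube start -p skaffold-demo --memory=3072 --driver=kvm2
$ skaffold run --minikube-profile skaffold-demo --kube-context skaffold-demo \
    --status-check=true --port-forward=false --interactive=false
$ kubectl --context skaffold-demo get pods -l app=leeroy-app   # the sample app's pods should reach Running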

                                                
                                    
TestRunningBinaryUpgrade (160.37s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.906615081 start -p running-upgrade-678218 --memory=3072 --vm-driver=kvm2  --auto-update-drivers=false
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.906615081 start -p running-upgrade-678218 --memory=3072 --vm-driver=kvm2  --auto-update-drivers=false: (1m32.156884412s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-678218 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false
E0929 12:38:18.894892  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-678218 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false: (1m6.630727853s)
helpers_test.go:175: Cleaning up "running-upgrade-678218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-678218
--- PASS: TestRunningBinaryUpgrade (160.37s)
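
Note: the running-binary upgrade is just a second `minikube start` on the same, still-running profile using the newer binary; a sketch with illustrative binary paths:

$ /path/to/minikube-v1.32.0 start -p upgrade-demo --memory=3072 --vm-driver=kvm2   # old binary (old --vm-driver spelling)
$ ./out/minikube-linux-amd64 start -p upgrade-demo --memory=3072 --driver=kvm2     # new binary adopts the running cluster
$ ./out/minikube-linux-amd64 delete -p upgrade-demo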

                                                
                                    
TestKubernetesUpgrade (212.84s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-691891 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-691891 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false: (1m13.500022445s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-691891
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-691891: (4.16303672s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-691891 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-691891 status --format={{.Host}}: exit status 7 (80.398744ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-691891 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false
E0929 12:38:01.966037  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-691891 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false: (1m13.803548755s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-691891 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-691891 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-691891 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --auto-update-drivers=false: exit status 106 (108.749331ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-691891] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21654
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21654-591397/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21654-591397/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-691891
	    minikube start -p kubernetes-upgrade-691891 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6918912 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-691891 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-691891 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-691891 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false: (1m0.012324783s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-691891" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-691891
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-691891: (1.085238814s)
--- PASS: TestKubernetesUpgrade (212.84s)
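
Note: the upgrade path above is start at v1.28.0, stop, start again at v1.34.0; an in-place downgrade is refused (K8S_DOWNGRADE_UNSUPPORTED) and the only way back is to delete and recreate the profile. Condensed, profile name illustrative:

$ minikube start -p k8s-upgrade --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2
$ minikube stop -p k8s-upgrade
$ minikube start -p k8s-upgrade --memory=3072 --kubernetes-version=v1.34.0 --driver=kvm2
$ minikube start -p k8s-upgrade --kubernetes-version=v1.28.0 --driver=kvm2   # exits 106: downgrade unsupported
$ minikube delete -p k8s-upgrade
$ minikube start -p k8s-upgrade --kubernetes-version=v1.28.0                 # the supported route back, per the suggestion above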

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.19s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.19s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (172.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3080081049 start -p stopped-upgrade-803607 --memory=3072 --vm-driver=kvm2  --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3080081049 start -p stopped-upgrade-803607 --memory=3072 --vm-driver=kvm2  --auto-update-drivers=false: (1m50.302491618s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3080081049 -p stopped-upgrade-803607 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3080081049 -p stopped-upgrade-803607 stop: (13.332504047s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-803607 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-803607 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false: (48.951608089s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (172.59s)
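
Note: the stopped-binary variant differs from the running-binary one only in stopping the cluster with the old binary before the new binary takes over; sketch with illustrative paths:

$ /path/to/minikube-v1.32.0 start -p stopped-upgrade --memory=3072 --vm-driver=kvm2
$ /path/to/minikube-v1.32.0 stop -p stopped-upgrade
$ ./out/minikube-linux-amd64 start -p stopped-upgrade --memory=3072 --driver=kvm2   # new binary restarts and upgrades it
$ ./out/minikube-linux-amd64 logs -p stopped-upgrade                                # readable afterwards, as MinikubeLogs below confirms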

                                                
                                    
TestPause/serial/Start (95.22s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-565136 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --auto-update-drivers=false
E0929 12:37:45.197513  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-565136 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --auto-update-drivers=false: (1m35.224020842s)
--- PASS: TestPause/serial/Start (95.22s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (76.89s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-565136 --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-565136 --alsologtostderr -v=1 --driver=kvm2  --auto-update-drivers=false: (1m16.851422211s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (76.89s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-221475 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-221475 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --auto-update-drivers=false: exit status 14 (70.902102ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-221475] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21654
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21654-591397/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21654-591397/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
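
Note: the exit-14 case above documents that --no-kubernetes and --kubernetes-version are mutually exclusive; drop the version flag, or clear a globally configured version first, as the error message suggests:

$ minikube config unset kubernetes-version                                    # only needed if a global default is set
$ minikube start -p nok8s-demo --no-kubernetes --memory=3072 --driver=kvm2    # VM only, no Kubernetes components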

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (57.02s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-221475 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-221475 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --auto-update-drivers=false: (56.672072053s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-221475 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (57.02s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-803607
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-803607: (1.232538518s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (37.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-221475 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --auto-update-drivers=false
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-221475 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --auto-update-drivers=false: (35.879280081s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-221475 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-221475 status -o json: exit status 2 (275.87165ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-221475","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-221475
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (37.15s)

                                                
                                    
TestPause/serial/Pause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-565136 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

                                                
                                    
TestPause/serial/VerifyStatus (0.31s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-565136 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-565136 --output=json --layout=cluster: exit status 2 (313.591995ms)

                                                
                                                
-- stdout --
	{"Name":"pause-565136","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-565136","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)

                                                
                                    
TestPause/serial/Unpause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-565136 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

                                                
                                    
TestPause/serial/PauseAgain (0.88s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-565136 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.88s)

                                                
                                    
TestPause/serial/DeletePaused (0.92s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-565136 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.92s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (3.04s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.038923133s)
--- PASS: TestPause/serial/VerifyDeletedResources (3.04s)
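
Note: the pause lifecycle covered by this group, condensed into commands (profile name illustrative); status deliberately exits non-zero while components are paused or stopped:

$ minikube pause -p pause-demo
$ minikube status -p pause-demo --output=json --layout=cluster   # exits 2; apiserver reports Paused (418), kubelet Stopped (405)
$ minikube unpause -p pause-demo
$ minikube pause -p pause-demo                                   # pausing again after unpause is fine
$ minikube delete -p pause-demo
$ minikube profile list --output json                            # the deleted profile should no longer be listed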

                                                
                                    
TestNoKubernetes/serial/Start (57.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-221475 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-221475 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --auto-update-drivers=false: (57.68205696s)
--- PASS: TestNoKubernetes/serial/Start (57.68s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-221475 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-221475 "sudo systemctl is-active --quiet service kubelet": exit status 1 (236.557511ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
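
Note: the kubelet check above leans on systemctl's exit code over minikube ssh; a non-zero exit confirms the unit is not active (same illustrative profile name as earlier):

$ minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet"
$ echo $?   # non-zero (the test sees exit status 1), i.e. Kubernetes is not running inside the VM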

                                                
                                    
TestNoKubernetes/serial/ProfileList (8.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (5.176971998s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.775582433s)
--- PASS: TestNoKubernetes/serial/ProfileList (8.95s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-221475
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-221475: (1.422345636s)
--- PASS: TestNoKubernetes/serial/Stop (1.42s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (48.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-221475 --driver=kvm2  --auto-update-drivers=false
E0929 12:42:09.802629  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/skaffold-884283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-221475 --driver=kvm2  --auto-update-drivers=false: (48.042037318s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (48.04s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-221475 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-221475 "sudo systemctl is-active --quiet service kubelet": exit status 1 (234.058381ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (82.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-702852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-702852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --auto-update-drivers=false: (1m22.77630772s)
--- PASS: TestNetworkPlugins/group/auto/Start (82.78s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (113.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-702852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --auto-update-drivers=false
E0929 12:43:18.895914  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/addons-214441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-702852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --auto-update-drivers=false: (1m53.134581684s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (113.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (93.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-702852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --auto-update-drivers=false
E0929 12:44:12.687374  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/skaffold-884283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-702852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --auto-update-drivers=false: (1m33.91865687s)
--- PASS: TestNetworkPlugins/group/calico/Start (93.92s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-702852 "pgrep -a kubelet"
I0929 12:44:16.929519  595293 config.go:182] Loaded profile config "auto-702852": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-702852 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fwsct" [d8eecbf0-7704-4151-bb4e-632475ab7bb2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fwsct" [d8eecbf0-7704-4151-bb4e-632475ab7bb2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004543731s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.30s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-702852 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-702852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-702852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
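
Note: every NetworkPlugins group runs the same three probes against a small netcat deployment; condensed, with an illustrative context name:

$ kubectl --context auto-demo replace --force -f testdata/netcat-deployment.yaml
$ kubectl --context auto-demo exec deployment/netcat -- nslookup kubernetes.default                  # cluster DNS
$ kubectl --context auto-demo exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"  # localhost reachability
$ kubectl --context auto-demo exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"     # hairpin via the service name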

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (69.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-702852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-702852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --auto-update-drivers=false: (1m9.680231413s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (69.68s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-nhqqh" [08cf4593-2d18-4466-8a58-5df625a69da2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006296544s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-702852 "pgrep -a kubelet"
I0929 12:44:59.706344  595293 config.go:182] Loaded profile config "kindnet-702852": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-702852 replace --force -f testdata/netcat-deployment.yaml
I0929 12:45:00.147982  595293 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xz5lv" [956bfc2b-bc4a-4d1b-b029-3b54078b30ec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xz5lv" [956bfc2b-bc4a-4d1b-b029-3b54078b30ec] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.00701797s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.47s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-702852 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-702852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-702852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-887xz" [0e570691-293e-48d2-a5f5-3ff9b4c16812] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-887xz" [0e570691-293e-48d2-a5f5-3ff9b4c16812] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004825221s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-702852 "pgrep -a kubelet"
I0929 12:45:23.452949  595293 config.go:182] Loaded profile config "calico-702852": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-702852 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2w2kh" [d0159f2b-d256-4a2c-83d4-536820c32483] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2w2kh" [d0159f2b-d256-4a2c-83d4-536820c32483] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.126270194s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.41s)

                                                
                                    
TestNetworkPlugins/group/false/Start (75.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-702852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2  --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-702852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2  --auto-update-drivers=false: (1m15.505641102s)
--- PASS: TestNetworkPlugins/group/false/Start (75.51s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-702852 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.36s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-702852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-702852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (64.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-702852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-702852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --auto-update-drivers=false: (1m4.922404665s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (64.92s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-702852 "pgrep -a kubelet"
I0929 12:45:56.289682  595293 config.go:182] Loaded profile config "custom-flannel-702852": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-702852 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7c4kz" [6c106f67-dfc4-40a4-a356-5cd92c8e2b16] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7c4kz" [6c106f67-dfc4-40a4-a356-5cd92c8e2b16] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.00512663s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-702852 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-702852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-702852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (70.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-702852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-702852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --auto-update-drivers=false: (1m10.430577407s)
--- PASS: TestNetworkPlugins/group/flannel/Start (70.43s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (92.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-702852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --auto-update-drivers=false
E0929 12:46:31.431631  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/gvisor-756631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-702852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --auto-update-drivers=false: (1m32.922292131s)
--- PASS: TestNetworkPlugins/group/bridge/Start (92.92s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-702852 "pgrep -a kubelet"
I0929 12:46:44.585693  595293 config.go:182] Loaded profile config "false-702852": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (11.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-702852 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-czqb2" [e76a74dc-9021-4873-a780-aac4904072be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-czqb2" [e76a74dc-9021-4873-a780-aac4904072be] Running
E0929 12:46:51.913732  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/gvisor-756631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.006882433s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.32s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-702852 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-702852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-702852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0929 12:46:56.529414  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/skaffold-884283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-702852 "pgrep -a kubelet"
I0929 12:47:00.166779  595293 config.go:182] Loaded profile config "enable-default-cni-702852": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-702852 replace --force -f testdata/netcat-deployment.yaml
I0929 12:47:00.636724  595293 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I0929 12:47:00.660636  595293 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qmzfs" [d45155b1-67d9-4de0-a55c-37954fd9fb30] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qmzfs" [d45155b1-67d9-4de0-a55c-37954fd9fb30] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.00569745s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.52s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-702852 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-702852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-702852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (95.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-702852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2  --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-702852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2  --auto-update-drivers=false: (1m35.074805269s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (95.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (76.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-275517 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-275517 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m16.942754254s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (76.94s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-xn5qg" [c564cf9a-7c55-41a1-9152-615c94e08cbd] Running
E0929 12:47:32.875374  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/gvisor-756631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00731509s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-702852 "pgrep -a kubelet"
I0929 12:47:38.287784  595293 config.go:182] Loaded profile config "flannel-702852": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.29s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-702852 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xcxgs" [f854d75e-835c-4368-b9fa-b6f0e4592f7c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xcxgs" [f854d75e-835c-4368-b9fa-b6f0e4592f7c] Running
E0929 12:47:45.197330  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/functional-345567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004364571s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-702852 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-702852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-702852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-702852 "pgrep -a kubelet"
I0929 12:48:02.356736  595293 config.go:182] Loaded profile config "bridge-702852": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-702852 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hjnmn" [968eca39-4786-4ca6-a94f-5ce1c5d0a681] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hjnmn" [968eca39-4786-4ca6-a94f-5ce1c5d0a681] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005758777s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (73.7s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-438773 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-438773 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m13.698761338s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (73.70s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-702852 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-702852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-702852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (76.16s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-122886 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-122886 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m16.160194477s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (76.16s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-275517 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [71016ff2-aaf9-4a74-affd-34b21907f788] Pending
helpers_test.go:352: "busybox" [71016ff2-aaf9-4a74-affd-34b21907f788] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
I0929 12:48:49.054615  595293 config.go:182] Loaded profile config "kubenet-702852": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
helpers_test.go:352: "busybox" [71016ff2-aaf9-4a74-affd-34b21907f788] Running
E0929 12:48:54.796850  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/gvisor-756631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.005420967s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-275517 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.39s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-702852 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (12.4s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-702852 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vz4jw" [2654d383-3657-4372-b2b8-1adbd3706e37] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vz4jw" [2654d383-3657-4372-b2b8-1adbd3706e37] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.004901115s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.40s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-275517 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-275517 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.264824363s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-275517 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.36s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (14.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-275517 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-275517 --alsologtostderr -v=3: (14.122385157s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (14.12s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-702852 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-702852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-702852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275517 -n old-k8s-version-275517
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275517 -n old-k8s-version-275517: exit status 7 (83.315112ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-275517 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (45.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-275517 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.28.0
E0929 12:49:17.211466  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/auto-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:17.217936  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/auto-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:17.229476  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/auto-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:17.251004  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/auto-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:17.292621  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/auto-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:17.374091  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/auto-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:17.535872  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/auto-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:17.857182  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/auto-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-275517 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.28.0: (45.456224701s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275517 -n old-k8s-version-275517
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (45.89s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-865689 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.0
E0929 12:49:19.781948  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/auto-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:22.343855  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/auto-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-865689 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m14.997273309s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (75.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.41s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-438773 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b432f6dc-0cd2-4a4c-ae46-c8844ab0f81e] Pending
helpers_test.go:352: "busybox" [b432f6dc-0cd2-4a4c-ae46-c8844ab0f81e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0929 12:49:27.465927  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/auto-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [b432f6dc-0cd2-4a4c-ae46-c8844ab0f81e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.006404507s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-438773 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.41s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.33s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-438773 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-438773 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.210236708s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-438773 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (14.05s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-438773 --alsologtostderr -v=3
E0929 12:49:37.708202  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/auto-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-438773 --alsologtostderr -v=3: (14.054368832s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (14.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-122886 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [69efd905-197e-494f-9c1c-198603043401] Pending
helpers_test.go:352: "busybox" [69efd905-197e-494f-9c1c-198603043401] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [69efd905-197e-494f-9c1c-198603043401] Running
E0929 12:49:53.429630  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/kindnet-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:53.436032  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/kindnet-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:53.447435  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/kindnet-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:53.468922  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/kindnet-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:53.510474  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/kindnet-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:53.592392  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/kindnet-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:53.754318  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/kindnet-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:54.076662  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/kindnet-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:54.718260  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/kindnet-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:56.000258  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/kindnet-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.006546213s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-122886 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.35s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-438773 -n no-preload-438773
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-438773 -n no-preload-438773: exit status 7 (99.987679ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-438773 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (53.51s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-438773 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-438773 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.0: (53.12422s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-438773 -n no-preload-438773
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (53.51s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.3s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-122886 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-122886 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.18733976s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-122886 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.42s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-122886 --alsologtostderr -v=3
E0929 12:49:58.189631  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/auto-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:58.561663  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/kindnet-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-122886 --alsologtostderr -v=3: (12.422619434s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-5fd7h" [d471c596-a9d8-4333-997d-5e4a7eba2119] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 12:50:03.683685  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/kindnet-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-5fd7h" [d471c596-a9d8-4333-997d-5e4a7eba2119] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.006343228s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-122886 -n embed-certs-122886
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-122886 -n embed-certs-122886: exit status 7 (75.257878ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-122886 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (53.32s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-122886 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-122886 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.0: (52.896620692s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-122886 -n embed-certs-122886
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-5fd7h" [d471c596-a9d8-4333-997d-5e4a7eba2119] Running
E0929 12:50:13.925278  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/kindnet-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:50:17.199951  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/calico-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:50:17.206341  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/calico-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:50:17.217964  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/calico-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:50:17.239596  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/calico-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:50:17.281075  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/calico-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:50:17.362612  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/calico-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:50:17.524226  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/calico-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:50:17.846519  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/calico-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:50:18.488286  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/calico-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005071141s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-275517 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-275517 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-275517 --alsologtostderr -v=1
E0929 12:50:19.770694  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/calico-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-275517 --alsologtostderr -v=1: (1.095960931s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275517 -n old-k8s-version-275517
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275517 -n old-k8s-version-275517: exit status 2 (340.173466ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-275517 -n old-k8s-version-275517
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-275517 -n old-k8s-version-275517: exit status 2 (312.438915ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-275517 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275517 -n old-k8s-version-275517
E0929 12:50:22.332828  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/calico-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-275517 -n old-k8s-version-275517
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.73s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (71.52s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-227448 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.0
E0929 12:50:27.454684  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/calico-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:50:34.406984  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/kindnet-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-227448 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m11.520281613s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (71.52s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-865689 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4d8092a7-0415-49ca-8e86-fe4460004078] Pending
helpers_test.go:352: "busybox" [4d8092a7-0415-49ca-8e86-fe4460004078] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0929 12:50:37.696173  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/calico-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [4d8092a7-0415-49ca-8e86-fe4460004078] Running
E0929 12:50:39.151766  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/auto-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.005882567s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-865689 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nzmwk" [0dbd9177-6a84-4930-ac47-75b58a0e857a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nzmwk" [0dbd9177-6a84-4930-ac47-75b58a0e857a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.005100658s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-865689 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-865689 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.16938816s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-865689 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-865689 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-865689 --alsologtostderr -v=3: (13.031419758s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nzmwk" [0dbd9177-6a84-4930-ac47-75b58a0e857a] Running
E0929 12:50:56.563146  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/custom-flannel-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:50:56.569662  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/custom-flannel-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:50:56.581201  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/custom-flannel-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:50:56.602717  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/custom-flannel-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:50:56.644303  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/custom-flannel-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:50:56.726081  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/custom-flannel-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:50:56.887765  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/custom-flannel-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:50:57.209163  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/custom-flannel-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:50:57.851382  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/custom-flannel-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00495143s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-438773 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-438773 image list --format=json
E0929 12:50:58.177564  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/calico-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.42s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-438773 --alsologtostderr -v=1
E0929 12:50:59.133774  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/custom-flannel-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-438773 -n no-preload-438773
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-438773 -n no-preload-438773: exit status 2 (300.083317ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-438773 -n no-preload-438773
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-438773 -n no-preload-438773: exit status 2 (333.935702ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-438773 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-438773 -n no-preload-438773
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-438773 -n no-preload-438773
E0929 12:51:01.695132  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/custom-flannel-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.42s)
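The Pause subtest drives exactly the command sequence shown above: pause, two status probes, unpause, two more probes. While the cluster is paused, minikube status exits non-zero (exit status 2 in this run), and the test explicitly tolerates that ("may be ok"). A small sketch of the same flow from Go, treating a non-zero status exit as informational rather than fatal:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

const (
	minikube = "out/minikube-linux-amd64"
	profile  = "no-preload-438773"
)

// status runs one `minikube status` probe and returns its stdout and exit code;
// a non-zero exit just reports a non-running component here.
func status(format string) (string, int, error) {
	out, err := exec.Command(minikube, "status", "--format="+format, "-p", profile, "-n", profile).Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return string(out), exitErr.ExitCode(), nil // e.g. exit status 2 while paused
	}
	return string(out), 0, err
}

func main() {
	for _, verb := range []string{"pause", "unpause"} {
		if err := exec.Command(minikube, verb, "-p", profile, "--alsologtostderr", "-v=1").Run(); err != nil {
			fmt.Println(verb, "failed:", err)
			return
		}
		for _, f := range []string{"{{.APIServer}}", "{{.Kubelet}}"} {
			out, code, err := status(f)
			fmt.Printf("%s after %s -> %q (exit %d, err %v)\n", f, verb, out, code, err)
		}
	}
}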

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-865689 -n default-k8s-diff-port-865689
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-865689 -n default-k8s-diff-port-865689: exit status 7 (84.768735ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-865689 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)
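EnableAddonAfterStop first probes the host state (on a stopped cluster, status --format={{.Host}} prints "Stopped" and exits with status 7, which the test accepts) and then enables the dashboard addon against the stopped profile. A compact sketch of that two-step flow, using only the commands and flags visible in the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const (
		minikube = "out/minikube-linux-amd64"
		profile  = "default-k8s-diff-port-865689"
	)

	// Step 1: probe the host; "Stopped" plus a non-zero exit (7 in this run) is expected.
	out, err := exec.Command(minikube, "status", "--format={{.Host}}", "-p", profile, "-n", profile).Output()
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		fmt.Println("could not run status:", err)
		return
	}
	fmt.Printf("host state: %q (status error: %v, may be ok)\n", strings.TrimSpace(string(out)), err)

	// Step 2: enable the dashboard addon on the stopped profile, with the same
	// image override the test passes above.
	enable := exec.Command(minikube, "addons", "enable", "dashboard", "-p", profile,
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	if err := enable.Run(); err != nil {
		fmt.Println("addons enable failed:", err)
	}
}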

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-865689 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-865689 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.0: (50.986386261s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-865689 -n default-k8s-diff-port-865689
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.29s)
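SecondStart re-runs minikube start against the stopped profile with the same flags as the first start (here including --apiserver-port=8444 and --kubernetes-version=v1.34.0) and records how long the restart takes. A bare-bones sketch of invoking and timing that restart, with the flag set copied from the log line above:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"start", "-p", "default-k8s-diff-port-865689",
		"--memory=3072", "--alsologtostderr", "--wait=true",
		"--apiserver-port=8444",
		"--driver=kvm2", "--auto-update-drivers=false",
		"--kubernetes-version=v1.34.0",
	}
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	started := time.Now()
	err := cmd.Run()
	fmt.Printf("second start took %s (err: %v)\n", time.Since(started).Round(time.Millisecond), err)
}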

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gmv6t" [b55f4491-961a-4160-b61a-34b10a33c985] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gmv6t" [b55f4491-961a-4160-b61a-34b10a33c985] Running
E0929 12:51:06.817404  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/custom-flannel-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:10.936129  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/gvisor-756631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.006220133s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)
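UserAppExistsAfterStop (and the AddonExistsAfterStop check that follows) waits up to 9 minutes for a pod matching k8s-app=kubernetes-dashboard to become healthy after the restart. A minimal client-go sketch of that kind of label-selector polling; the real helper in helpers_test.go also tracks Ready conditions, and the kubeconfig path used here is just the default location, not necessarily what the suite uses:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabeledPod polls until at least one pod matching selector is Running.
func waitForLabeledPod(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("no Running pod for %q in %q within %s", selector, ns, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		fmt.Println("kubeconfig:", err)
		return
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println("client:", err)
		return
	}
	err = waitForLabeledPod(context.Background(), cs,
		"kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute)
	fmt.Println("wait result:", err)
}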

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gmv6t" [b55f4491-961a-4160-b61a-34b10a33c985] Running
E0929 12:51:15.369434  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/kindnet-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004315232s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-122886 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-122886 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-122886 --alsologtostderr -v=1
E0929 12:51:17.059753  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/custom-flannel-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-122886 -n embed-certs-122886
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-122886 -n embed-certs-122886: exit status 2 (269.292436ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-122886 -n embed-certs-122886
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-122886 -n embed-certs-122886: exit status 2 (287.852228ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-122886 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-122886 -n embed-certs-122886
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-122886 -n embed-certs-122886
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-227448 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (13.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-227448 --alsologtostderr -v=3
E0929 12:51:37.541846  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/custom-flannel-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:38.638543  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/gvisor-756631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:39.139238  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/calico-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:44.882009  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/false-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:44.888457  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/false-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:44.899945  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/false-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:44.921460  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/false-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:44.962988  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/false-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:45.044522  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/false-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:45.206340  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/false-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:45.528041  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/false-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:46.169775  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/false-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:47.451612  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/false-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-227448 --alsologtostderr -v=3: (13.380159264s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.38s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-227448 -n newest-cni-227448
E0929 12:51:50.013952  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/false-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-227448 -n newest-cni-227448: exit status 7 (78.479065ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-227448 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (34.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-227448 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-227448 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --auto-update-drivers=false --kubernetes-version=v1.34.0: (34.146689628s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-227448 -n newest-cni-227448
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.43s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-z6msd" [814bd55c-bd9b-4e90-96dd-9c719ffb9484] Running
E0929 12:51:55.136337  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/false-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005315535s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-z6msd" [814bd55c-bd9b-4e90-96dd-9c719ffb9484] Running
E0929 12:52:00.607360  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/enable-default-cni-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:52:00.613827  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/enable-default-cni-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:52:00.625343  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/enable-default-cni-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:52:00.646878  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/enable-default-cni-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:52:00.688426  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/enable-default-cni-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:52:00.769986  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/enable-default-cni-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:52:00.931705  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/enable-default-cni-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:52:01.073403  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/auto-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:52:01.254085  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/enable-default-cni-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:52:01.896401  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/enable-default-cni-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004252538s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-865689 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-865689 image list --format=json
E0929 12:52:03.177815  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/enable-default-cni-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.79s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-865689 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-865689 -n default-k8s-diff-port-865689
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-865689 -n default-k8s-diff-port-865689: exit status 2 (279.368463ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-865689 -n default-k8s-diff-port-865689
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-865689 -n default-k8s-diff-port-865689: exit status 2 (295.650209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-865689 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-865689 -n default-k8s-diff-port-865689
E0929 12:52:05.378419  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/false-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:52:05.739320  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/enable-default-cni-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-865689 -n default-k8s-diff-port-865689
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.79s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-227448 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.64s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-227448 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-227448 -n newest-cni-227448
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-227448 -n newest-cni-227448: exit status 2 (258.369235ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-227448 -n newest-cni-227448
E0929 12:52:25.860168  595293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21654-591397/.minikube/profiles/false-702852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-227448 -n newest-cni-227448: exit status 2 (268.020128ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-227448 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-227448 -n newest-cni-227448
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-227448 -n newest-cni-227448
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.64s)

                                                
                                    

Test skip (34/345)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.0/cached-images 0
15 TestDownloadOnly/v1.34.0/binaries 0
16 TestDownloadOnly/v1.34.0/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
109 TestFunctional/parallel/PodmanEnv 0
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
158 TestFunctionalNewestKubernetes 0
188 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
215 TestKicCustomNetwork 0
216 TestKicExistingNetwork 0
217 TestKicCustomSubnet 0
218 TestKicStaticIP 0
250 TestChangeNoneUser 0
253 TestScheduledStopWindows 0
257 TestInsufficientStorage 0
261 TestMissingContainerUpgrade 0
272 TestNetworkPlugins/group/cilium 4.02
283 TestStartStop/group/disable-driver-mounts 0.2
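The skipped entries above are almost all environment gates: driver-only tests (docker, podman, KIC), OS-only tests (darwin, windows), and tests parked behind upstream issues. As a rough illustration (not minikube's actual helpers), such gates reduce to a t.Skip guarded by an environment check, which is what produces the 0-second SKIP lines detailed below:

package skipgates

import (
	"runtime"
	"testing"
)

// skipUnlessDriver is a hypothetical helper showing the shape of gates like
// "only runs with docker driver".
func skipUnlessDriver(t *testing.T, current string, wanted ...string) {
	t.Helper()
	for _, w := range wanted {
		if current == w {
			return
		}
	}
	t.Skipf("only runs with %v driver (current driver: %s)", wanted, current)
}

// skipUnlessDarwin mirrors gates like "Skip if not darwin."
func skipUnlessDarwin(t *testing.T) {
	t.Helper()
	if runtime.GOOS != "darwin" {
		t.Skip("Skip if not darwin.")
	}
}

// TestOnlyOnDockerExample shows how a gate is used; with the kvm2 driver this
// test would be reported as SKIP, just like the entries above.
func TestOnlyOnDockerExample(t *testing.T) {
	skipUnlessDriver(t, "kvm2", "docker")
}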
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-702852 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-702852

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-702852

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-702852

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-702852

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-702852

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-702852

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-702852

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-702852

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-702852

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-702852

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-702852

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-702852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-702852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-702852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-702852" does not exist

                                                
                                                

>>> k8s: describe coredns pods:
error: context "cilium-702852" does not exist

>>> k8s: coredns logs:
error: context "cilium-702852" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-702852" does not exist

>>> k8s: api server logs:
error: context "cilium-702852" does not exist

>>> host: /etc/cni:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> host: ip a s:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> host: ip r s:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> host: iptables-save:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> host: iptables table nat:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-702852

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-702852

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-702852" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-702852" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-702852

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-702852

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-702852" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-702852" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-702852" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-702852" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-702852" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> host: kubelet daemon config:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> k8s: kubelet logs:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-702852

>>> host: docker daemon status:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> host: docker daemon config:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> host: docker system info:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> host: cri-docker daemon status:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> host: cri-docker daemon config:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> host: cri-dockerd version:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> host: containerd daemon status:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> host: containerd daemon config:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> host: containerd config dump:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> host: crio daemon status:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> host: crio daemon config:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> host: /etc/crio:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

>>> host: crio config:
* Profile "cilium-702852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-702852"

----------------------- debugLogs end: cilium-702852 [took: 3.845607694s] --------------------------------
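Every command in the dump above fails the same way because the cilium-702852 profile was never started, so no kubeconfig context was ever written for it; the ">>> k8s: kubectl config:" output confirms this with clusters: null and contexts: null. Below is a minimal sketch of that context check, assuming client-go's clientcmd package; the helper is illustrative only and is not part of the minikube test suite.

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Single-path case only; a real KUBECONFIG value may list several files.
	path := clientcmd.RecommendedHomeFile
	if v := os.Getenv("KUBECONFIG"); v != "" {
		path = v
	}

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "could not load kubeconfig:", err)
		os.Exit(1)
	}

	// With an empty kubeconfig (contexts: null), no name can ever match,
	// which is why kubectl reports the context as missing.
	const name = "cilium-702852"
	if _, ok := cfg.Contexts[name]; !ok {
		fmt.Printf("context %q does not exist (%d contexts known)\n", name, len(cfg.Contexts))
		return
	}
	fmt.Printf("context %q found\n", name)
}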
helpers_test.go:175: Cleaning up "cilium-702852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-702852
--- SKIP: TestNetworkPlugins/group/cilium (4.02s)
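The cleanup step above shells out to the built minikube binary. A minimal sketch of that kind of cleanup, assuming it simply runs out/minikube-linux-amd64 delete -p <profile> with a timeout; the function name and timeout below are illustrative, not the actual helpers_test.go code.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// deleteProfile removes a leftover test profile by invoking the minikube
// binary, mirroring the "delete -p cilium-702852" run logged above.
func deleteProfile(binary, profile string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, binary, "delete", "-p", profile)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("delete %s: %v\n%s", profile, err, out)
	}
	return nil
}

func main() {
	if err := deleteProfile("out/minikube-linux-amd64", "cilium-702852"); err != nil {
		fmt.Println(err)
	}
}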

x
+
TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-541186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-541186
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)
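The SKIP above comes from a driver gate in start_stop_delete_test.go: the test exits early unless the active driver is virtualbox. A minimal sketch of that pattern, assuming the driver name is available to the test; the package name, constant, and test body below are illustrative only, not the real suite code.

package integration_test

import "testing"

// driverName stands in for the value the real suite derives from its driver
// flag; hard-coded here purely for illustration.
const driverName = "kvm2"

func TestDisableDriverMounts(t *testing.T) {
	// Bail out early on any driver other than virtualbox, producing a SKIP
	// like the one recorded above.
	if driverName != "virtualbox" {
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
	// The real test would go on to start a cluster with driver mounts disabled.
}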
